diff --git a/README.md b/README.md index fd366b8..cca8247 100644 --- a/README.md +++ b/README.md @@ -2,9 +2,14 @@ The purpose of this project is to transfer data (pictures...) via a 2,7kHz SSB channel on the narrow band transponder as fast as possible. # this is work in progress -Version 0.2 is working on my linux PC, Odroid SBC and Raspberry 4 (3B+) +Version 0.2 is working on: +Windows 10 (should work on Win7, not tested) +Linux Desktop PC +Odroid SBC +Raspberry 4 (3B+) # Prerequisites +* Windows 10 Desktop PC ... working * LINUX Desktop PC ... working * Raspberry PI 4 ... working * Raspberry PI 3B+ ... working, but not 100% error free in fullduplex mode (RX only or TX only is working) @@ -12,28 +17,33 @@ Version 0.2 is working on my linux PC, Odroid SBC and Raspberry 4 (3B+) * Odroid C2 ... working * Odroid C4 ... working -* GNU Radio Version 3.8.x. - * Raspberry: Raspian OS ist NOT working, instead Ubuntu 64bit is required * Application Software "oscardata.exe" running on Windows, Linux, (possibly MAC-OS, not tested) # building the software -1. go into the folder "modem" +* Linux +1. go into the folder "hsmodem" 2. run "make" - +3. the executable is in the folder LinuxRelease +* Windows +1. load hsmodem.sln in Visual Studio 2019 and build the Release version. +2. the executable is in the folder WinRelease # starting the modem and application -1. go into the folder "modem" -2. run the software: ./qo100modem -command line parameters: + +You need to run two programs. The first one is "hsmodem", which runs in a terminal without a GUI; this is the modem that does all the modulation and demodulation work. +The second program is the user interface "oscardata.exe". + +1. go into the folder "WinRelease" or "LinuxRelease" +2. run the software: hsmodem.exe (Windows) or ./hsmodem (Linux) +optional command line parameter: no parameter ... normal usage -m IP ... specify the V4 IP adress of the device where the application software is running. This is useful if you have more than one qo100modem running simultaneously. Without this parameter the app will search the modem automatically. --e 1 ... do NOT start the GNU Radio files automatically. This is useful if you want to work on the GR Flowgraphs and want to start it manually. 3. start the user application on any PC in your home network. It will find the modem automatically -The file is located in QO-100-modem/oscardata/oscardata/bin/Release +The file is located in oscardata/oscardata/bin/Release On windows just start oscardata.exe On Linux start it with: mono oscardata.exe @@ -42,6 +52,9 @@ On Linux start it with: mono oscardata.exe * QO-100 via IC-9700, IC-7300 or IC-7100 ... working * Short Wave 6m band via IC-7300, IC-7100 ... working. In case of significant noise, use the lowest bit rate (3000 bit/s) +# TODOs +The current version V0.2 runs very well on Linux but shows a higher bit error rate on Windows. This has to do with the initialisation of the sound card: the default bit rate setting in the Windows sound settings implements some kind of "filtering". This is currently under evaluation. + # usage In the IC-9700 activate the DATA mode and the RX filter FIL1 to full range of 3.6kHz. 
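For reference, the Linux build and start sequence described in the README above condenses to roughly the following shell session (a sketch only; the paths assume the repository layout added by this patch, with hsmodem/, LinuxRelease/ and oscardata/ at the top level, and the -m option is only needed if several modems run in the same network):

    # build the modem
    cd hsmodem
    make                    # the executable ends up in ../LinuxRelease

    # start the modem (terminal program, no GUI)
    cd ../LinuxRelease
    ./hsmodem               # optionally: ./hsmodem -m <IPv4 of the PC running oscardata>

    # start the user interface on any PC in the home network
    cd ../oscardata/oscardata/bin/Release
    mono oscardata.exe      # on Windows simply start oscardata.exe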
diff --git a/WinRelease/bass.dll b/WinRelease/bass.dll new file mode 100755 index 0000000..4be4264 Binary files /dev/null and b/WinRelease/bass.dll differ diff --git a/WinRelease/hsmodem.exe b/WinRelease/hsmodem.exe new file mode 100755 index 0000000..e5f27a7 Binary files /dev/null and b/WinRelease/hsmodem.exe differ diff --git a/WinRelease/libfftw3-3.dll b/WinRelease/libfftw3-3.dll new file mode 100755 index 0000000..f5a97b4 Binary files /dev/null and b/WinRelease/libfftw3-3.dll differ diff --git a/WinRelease/libgcc_s_dw2-1.dll b/WinRelease/libgcc_s_dw2-1.dll new file mode 100755 index 0000000..9e32dc2 Binary files /dev/null and b/WinRelease/libgcc_s_dw2-1.dll differ diff --git a/WinRelease/libliquid.dll b/WinRelease/libliquid.dll new file mode 100755 index 0000000..f7eb831 Binary files /dev/null and b/WinRelease/libliquid.dll differ diff --git a/hsmodem.sln b/hsmodem.sln new file mode 100755 index 0000000..1673bde --- /dev/null +++ b/hsmodem.sln @@ -0,0 +1,54 @@ + +Microsoft Visual Studio Solution File, Format Version 12.00 +# Visual Studio Version 16 +VisualStudioVersion = 16.0.30517.126 +MinimumVisualStudioVersion = 10.0.40219.1 +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "hsmodem", "hsmodem\hsmodem.vcxproj", "{E6292FAA-E794-4107-BD89-2310BCDBC858}" +EndProject +Global + GlobalSection(SolutionConfigurationPlatforms) = preSolution + 64bit|Any CPU = 64bit|Any CPU + 64bit|Mixed Platforms = 64bit|Mixed Platforms + 64bit|Win32 = 64bit|Win32 + 64bit|x64 = 64bit|x64 + Debug|Any CPU = Debug|Any CPU + Debug|Mixed Platforms = Debug|Mixed Platforms + Debug|Win32 = Debug|Win32 + Debug|x64 = Debug|x64 + Release|Any CPU = Release|Any CPU + Release|Mixed Platforms = Release|Mixed Platforms + Release|Win32 = Release|Win32 + Release|x64 = Release|x64 + EndGlobalSection + GlobalSection(ProjectConfigurationPlatforms) = postSolution + {E6292FAA-E794-4107-BD89-2310BCDBC858}.64bit|Any CPU.ActiveCfg = 64bit|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.64bit|Any CPU.Build.0 = 64bit|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.64bit|Mixed Platforms.ActiveCfg = 64bit|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.64bit|Mixed Platforms.Build.0 = 64bit|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.64bit|Win32.ActiveCfg = 64bit|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.64bit|Win32.Build.0 = 64bit|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.64bit|x64.ActiveCfg = 64bit|x64 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.64bit|x64.Build.0 = 64bit|x64 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Debug|Any CPU.ActiveCfg = Debug|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Debug|Mixed Platforms.ActiveCfg = Debug|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Debug|Mixed Platforms.Build.0 = Debug|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Debug|Win32.ActiveCfg = Debug|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Debug|Win32.Build.0 = Debug|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Debug|x64.ActiveCfg = Debug|x64 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Debug|x64.Build.0 = Debug|x64 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Release|Any CPU.ActiveCfg = Release|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Release|Any CPU.Build.0 = Release|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Release|Mixed Platforms.ActiveCfg = Release|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Release|Mixed Platforms.Build.0 = Release|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Release|Win32.ActiveCfg = Release|Win32 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Release|Win32.Build.0 = Release|Win32 + 
{E6292FAA-E794-4107-BD89-2310BCDBC858}.Release|x64.ActiveCfg = Release|x64 + {E6292FAA-E794-4107-BD89-2310BCDBC858}.Release|x64.Build.0 = Release|x64 + EndGlobalSection + GlobalSection(SolutionProperties) = preSolution + HideSolutionNode = FALSE + EndGlobalSection + GlobalSection(ExtensibilityGlobals) = postSolution + SolutionGuid = {4CD5C45F-4015-48B4-A51D-E5D8D066732E} + EndGlobalSection +EndGlobal diff --git a/hsmodem/Makefile b/hsmodem/Makefile new file mode 100755 index 0000000..0748c20 --- /dev/null +++ b/hsmodem/Makefile @@ -0,0 +1,12 @@ +# makefile for hsmodem + +CXXFLAGS = -Wall -O3 -std=c++0x -Wno-write-strings -Wno-narrowing +LDFLAGS = -lpthread -lrt -lsndfile -lasound -lm -lbass -lfftw3 -lfftw3_threads -lliquid +OBJ = hsmodem.o constellation.o crc16.o frame_packer.o main_helper.o scrambler.o speed.o fec.o audio.o udp.o fft.o liquid_if.o + +default: $(OBJ) + g++ $(CXXFLAGS) -o ../LinuxRelease/hsmodem $(OBJ) $(LDFLAGS) + +clean: + rm -f *.o ../LinuxRelease/hsmodem + diff --git a/hsmodem/audio.cpp b/hsmodem/audio.cpp new file mode 100755 index 0000000..6638cae --- /dev/null +++ b/hsmodem/audio.cpp @@ -0,0 +1,414 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +* +* audio.cpp ... functions to handle audio in/out via a soundcard +* uses the "BASS" library +* +* captures samples from the sound card. 
+* Samples are 32-bit floats in a range of -1 to +1 +* get these samples from the thread safe fifo: cap_read_fifo(&floatvariable) +* +* plays samples to the sound card +* Samples are 32-bit floats in a range of -1 to +1 +* play the samples by calling the thread safe function: pb_write_fifo(floatsample) +* +*/ + +#include "hsmodem.h" + +BOOL CALLBACK RecordingCallback(HRECORD handle, const void *buffer, DWORD length, void *user); +DWORD CALLBACK WriteStream(HSTREAM handle, float *buffer, DWORD length, void *user); +int pb_read_fifo(float *data, int elements); +void close_audio(); +void cap_write_fifo(float sample); +int pb_fifo_freespace(int nolock); +void init_pipes(); + +#define CHANNELS 1 // no of channels used + +HRECORD rchan = 0; // recording channel +BASS_INFO info; +HSTREAM stream = 0; + +/*void showDeviceInfo(BASS_DEVICEINFO info) +{ + if (info.flags & BASS_DEVICE_ENABLED) printf("%s\n","BASS_DEVICE_ENABLED "); + if (info.flags & BASS_DEVICE_DEFAULT) printf("%s\n","BASS_DEVICE_DEFAULT "); + if (info.flags & BASS_DEVICE_INIT) printf("%s\n","BASS_DEVICE_INIT "); + if (info.flags & BASS_DEVICE_LOOPBACK) printf("%s\n","BASS_DEVICE_LOOPBACK "); + if (info.flags & BASS_DEVICE_TYPE_DIGITAL) printf("%s\n","BASS_DEVICE_TYPE_DIGITAL "); + if (info.flags & BASS_DEVICE_TYPE_DISPLAYPORT) printf("%s\n","BASS_DEVICE_TYPE_DISPLAYPORT "); + if (info.flags & BASS_DEVICE_TYPE_HANDSET) printf("%s\n","BASS_DEVICE_TYPE_HANDSET "); + if (info.flags & BASS_DEVICE_TYPE_HDMI) printf("%s\n","BASS_DEVICE_TYPE_HDMI "); + if (info.flags & BASS_DEVICE_TYPE_HEADPHONES) printf("%s\n","BASS_DEVICE_TYPE_HEADPHONES "); + if (info.flags & BASS_DEVICE_TYPE_HEADSET) printf("%s\n","BASS_DEVICE_TYPE_HEADSET "); + if (info.flags & BASS_DEVICE_TYPE_LINE) printf("%s\n","BASS_DEVICE_TYPE_LINE "); + if (info.flags & BASS_DEVICE_TYPE_MICROPHONE) printf("%s\n","BASS_DEVICE_TYPE_MICROPHONE "); + if (info.flags & BASS_DEVICE_TYPE_NETWORK) printf("%s\n","BASS_DEVICE_TYPE_NETWORK "); + if (info.flags & BASS_DEVICE_TYPE_SPDIF) printf("%s\n","BASS_DEVICE_TYPE_SPDIF "); + if (info.flags & BASS_DEVICE_TYPE_SPEAKERS) printf("%s\n","BASS_DEVICE_TYPE_SPEAKERS "); + +}*/ + +#define MAXDEVSTRLEN 2000 +uint8_t devstring[MAXDEVSTRLEN +100]; +char PBdevs[100][256]; // stores the device names, just for diagnosis, has no real function +char CAPdevs[100][256]; + +// build string of audio devices, to be sent to application as response to Broadcast search +void enumerateAudioDevices() +{ + memset(devstring, 0, sizeof(devstring)); + devstring[0] = 3; // ID for this UDP message + + // playback devices + int a; + int idx = 0; + BASS_DEVICEINFO info; + + strcat((char*)(devstring + 1), "System Default"); + strcat((char*)(devstring + 1), "~"); + strcpy(PBdevs[idx++], "System Default"); + + for (a = 1; BASS_GetDeviceInfo(a, &info); a++) + { + printf("PB device:%d = %s\n", a, info.name); + if (strlen((char*)(devstring+1)) > MAXDEVSTRLEN) break; + if (info.flags & BASS_DEVICE_ENABLED) + { + strncpy(PBdevs[idx], info.name, 255); + PBdevs[idx][255] = 0; + idx++; + strcat((char*)(devstring + 1), info.name); + strcat((char*)(devstring + 1), "~"); // audio device separator + } + } + + strcat((char*)(devstring + 1), "^"); // PB, CAP separator + + // capture devices + BASS_DEVICEINFO recinfo; + idx = 0; + + strcat((char*)(devstring + 1), "System Default"); + strcat((char*)(devstring + 1), "~"); + strcpy(CAPdevs[idx++], "System Default"); + + for (a = 0; BASS_RecordGetDeviceInfo(a, &recinfo); a++) + { + printf("CAP device:%d = %s\n", a, recinfo.name); + if 
(strlen((char*)(devstring + 1)) > MAXDEVSTRLEN) break; + if (recinfo.flags & BASS_DEVICE_ENABLED) + { + strncpy(CAPdevs[idx], recinfo.name, 255); + CAPdevs[idx][255] = 0; + idx++; + strcat((char*)(devstring + 1), recinfo.name); + strcat((char*)(devstring + 1), "~"); + } + } +} + +/* +* Audio Device numbering: +* +* Playback: +* 0 ... no audio, we use 0 for default, which is -1 +* 1 ... audio devices +* +* Record: +* 0 ... audio devices +* we insert "Default" at position 0, and let the audio devices start with 1, so we are compatible with playback +* but in init_audio() we have to subtract 1 +*/ + +uint8_t* getAudioDevicelist(int *len) +{ + *len = strlen((char*)(devstring+1))+1; + return devstring; +} + +// pbdev, capdev: -1=default device +int init_audio(int pbdev, int capdev) +{ + static int f = 1; + int ocd = capdev; + + // PB devices start with 1 (0 not used, but here used for Default which is -1) + if (pbdev == 255 || pbdev == 0) pbdev = -1; + + // CAP devices start with 0, but we use 0 for Default (-1) + // so we have to subtract 1 from the real devices + if (capdev == 255 || capdev == 0 || capdev == -1) capdev = -1; + else capdev--; + + if (f == 1) + { + f = 0; + enumerateAudioDevices(); + init_pipes(); + } + + close_audio(); + + printf("init audio, caprate:%d\n",caprate); + if (pbdev != -1) + printf("playback device %d: %s\n", pbdev, PBdevs[pbdev]); + else + printf("playback device %d: %s\n", pbdev, "Default"); + + if (capdev != -1) + printf("capture device %d: %s\n", capdev, CAPdevs[ocd]); + else + printf("capture device %d: %s\n", capdev, "Default"); + + // check the correct BASS was loaded + if (HIWORD(BASS_GetVersion()) != BASSVERSION) + { + printf("An incorrect version of BASS was loaded\n"); + return -1; + } + + // initialize default recording device + if (!BASS_RecordInit(capdev)) + { + printf("Can't initialize recording device: %d\n", BASS_ErrorGetCode()); + return -1; + } + + // initialize default output device + if (!BASS_Init(pbdev, caprate, 0, NULL, NULL)) + { + printf("Can't initialize output device\n"); + return -1; + } + + // set capture callback + rchan = BASS_RecordStart(caprate, CHANNELS, BASS_SAMPLE_FLOAT, RecordingCallback, 0); + if (!rchan) { + printf("Can't start capturing: %d\n", BASS_ErrorGetCode()); + return -1; + } + + // set play callback + BASS_GetInfo(&info); + stream = BASS_StreamCreate(info.freq, CHANNELS, BASS_SAMPLE_FLOAT, (STREAMPROC*)WriteStream, 0); // sample: 32 bit float + BASS_ChannelSetAttribute(stream, BASS_ATTRIB_BUFFER, 0); // no buffering for minimum latency + BASS_ChannelPlay(stream, FALSE); // start it + + printf("audio initialized\n"); + + return 0; +} + +void close_audio() +{ + if(stream != 0) + { + printf("!close Audio Devices\n"); + BASS_ChannelStop(rchan); + int rr = BASS_RecordFree(); + if (!rr) printf("Bass_RecordFree error: %d\n", BASS_ErrorGetCode()); + + BASS_StreamFree(stream); + int r = BASS_Free(); + if(!r) printf("Bass_Free error: %d\n", BASS_ErrorGetCode()); + stream = 0; + } +} + +// capture callback +// length: bytes. 
short=2byte, 2channels, so it requests samples*4 +BOOL CALLBACK RecordingCallback(HRECORD handle, const void *buffer, DWORD length, void *user) +{ + //printf("captured %ld samples\n",length/sizeof(float)); + //measure_speed(length/sizeof(float)); + + float *fbuffer = (float *)buffer; + //showbytestringf((char*)"cap:", fbuffer, 10); + //printf("w:%ld ",length/sizeof(float)); + for(unsigned int i=0; i<(length/sizeof(float)); i+=CHANNELS) + { + //printf("%f\n",fbuffer[i]); + cap_write_fifo(fbuffer[i]); + } + + return TRUE; // continue recording +} + +// play callback +// length: bytes. float=4byte, 2channels, so it requests samples*8 +DWORD CALLBACK WriteStream(HSTREAM handle, float *buffer, DWORD length, void *user) +{ + //printf("requested %ld samples\n", length / sizeof(float)); + int ret = pb_read_fifo(buffer, length / sizeof(float)); + if(ret == 0) + { + // fifo empty, send 00 + memset(buffer,0,length); + } + return length; +} + +// ================ thread safe fifo for audio callback routines =============== + +#ifdef _WIN32_ +CRITICAL_SECTION cap_crit_sec; +CRITICAL_SECTION pb_crit_sec; +#define CAP_LOCK EnterCriticalSection(&cap_crit_sec) +#define PB_LOCK EnterCriticalSection(&pb_crit_sec) +void CAP_UNLOCK() +{ + if (&cap_crit_sec != NULL) + LeaveCriticalSection(&cap_crit_sec); +} +void PB_UNLOCK() +{ + if (&pb_crit_sec != NULL) + LeaveCriticalSection(&pb_crit_sec); +} +#endif + +#ifdef _LINUX_ +pthread_mutex_t cap_crit_sec; +pthread_mutex_t pb_crit_sec; +#define CAP_LOCK pthread_mutex_lock(&cap_crit_sec) +void CAP_UNLOCK() { pthread_mutex_unlock(&cap_crit_sec); } +#define PB_LOCK pthread_mutex_lock(&pb_crit_sec) +void PB_UNLOCK() { pthread_mutex_unlock(&pb_crit_sec); } +#endif + +#define AUDIO_BUFFERMAXTIME 2 // fifo can buffer this time in [s] +#define AUDIO_PLAYBACK_BUFLEN (48000 * 10) // space for 10 seconds of samples +#define AUDIO_CAPTURE_BUFLEN (48000 * 10) + +int cap_wridx=0; +int cap_rdidx=0; +float cap_buffer[AUDIO_CAPTURE_BUFLEN]; + +int pb_wridx=0; +int pb_rdidx=0; +float pb_buffer[AUDIO_PLAYBACK_BUFLEN]; + +void init_pipes() +{ +#ifdef _WIN32_ + if (&cap_crit_sec != NULL) DeleteCriticalSection(&cap_crit_sec); + InitializeCriticalSection(&cap_crit_sec); + + if (&pb_crit_sec != NULL) DeleteCriticalSection(&pb_crit_sec); + InitializeCriticalSection(&pb_crit_sec); +#endif +} + +// write one sample into the fifo +// overwrite old data if the fifo is full +void cap_write_fifo(float sample) +{ + CAP_LOCK; + cap_buffer[cap_wridx] = sample; + if(++cap_wridx >= AUDIO_CAPTURE_BUFLEN) cap_wridx = 0; + CAP_UNLOCK(); +} + +int cap_read_fifo(float *data) +{ + CAP_LOCK; + + if (cap_rdidx == cap_wridx) + { + // Fifo empty, no data available + CAP_UNLOCK(); + return 0; + } + + *data = cap_buffer[cap_rdidx]; + if(++cap_rdidx >= AUDIO_CAPTURE_BUFLEN) cap_rdidx = 0; + CAP_UNLOCK(); + + return 1; +} + +void pb_write_fifo(float sample) +{ + PB_LOCK; + + // check if there is free space in fifo + if(pb_fifo_freespace(1) == 0) + { + PB_UNLOCK(); + printf("************* pb fifo full\n"); + return; + } + + pb_buffer[pb_wridx] = sample; + if(++pb_wridx >= AUDIO_PLAYBACK_BUFLEN) pb_wridx = 0; + PB_UNLOCK(); + //printf("write: pbw:%d pbr:%d\n",pb_wridx,pb_rdidx); +} + +void pb_write_fifo_clear() +{ + pb_wridx = pb_rdidx = 0; +} + +int pb_fifo_freespace(int nolock) +{ +int freebuf = 0; + + if(nolock == 0) PB_LOCK; + + int elemInFifo = (pb_wridx + AUDIO_PLAYBACK_BUFLEN - pb_rdidx) % AUDIO_PLAYBACK_BUFLEN; + freebuf = AUDIO_PLAYBACK_BUFLEN - elemInFifo; + + if(nolock == 0) PB_UNLOCK(); + + 
//printf("fifolen:%d check: pbw:%d pbr:%d freebuf:%d\n",AUDIO_PLAYBACK_BUFLEN,pb_wridx,pb_rdidx,freebuf); + + return freebuf; +} + +// read 'elements' floats from the fifo or return 0 if not enough floats are available +int pb_read_fifo(float *data, int elements) +{ + //printf("pb read fifo: %d\n",elements); + PB_LOCK; + + int e = AUDIO_PLAYBACK_BUFLEN - pb_fifo_freespace(1); + if(e < elements) + { + // Fifo empty, no data available + PB_UNLOCK(); + //printf("pb fifo empty, need:%d have:%d size:%d\n",elements,e,AUDIO_PLAYBACK_BUFLEN); + return 0; + } + + for(int i=0; i<elements; i++) + { + data[i] = pb_buffer[pb_rdidx]; + if(++pb_rdidx >= AUDIO_PLAYBACK_BUFLEN) pb_rdidx = 0; + } + //printf("read %d floats\n",elements); + + PB_UNLOCK(); + return 1; +} diff --git a/hsmodem/bass.h b/hsmodem/bass.h new file mode 100755 index 0000000..901f9a2 --- /dev/null +++ b/hsmodem/bass.h @@ -0,0 +1,1160 @@ +/* + BASS 2.4 C/C++ header file + Copyright (c) 1999-2019 Un4seen Developments Ltd. + + See the BASS.CHM file for more detailed documentation +*/ + +#ifndef BASS_H +#define BASS_H + +#ifdef _WIN32 +#include <wtypes.h> +typedef unsigned __int64 QWORD; +#else +#include <stdint.h> +#define WINAPI +#define CALLBACK +typedef uint8_t BYTE; +typedef uint16_t WORD; +typedef uint32_t DWORD; +typedef uint64_t QWORD; +#ifdef __OBJC__ +#include <objc/objc.h> +#else +typedef int BOOL; +#endif +#ifndef TRUE +#define TRUE 1 +#define FALSE 0 +#endif +#define LOBYTE(a) (BYTE)(a) +#define HIBYTE(a) (BYTE)((a)>>8) +#define LOWORD(a) (WORD)(a) +#define HIWORD(a) (WORD)((a)>>16) +#define MAKEWORD(a,b) (WORD)(((a)&0xff)|((b)<<8)) +#define MAKELONG(a,b) (DWORD)(((a)&0xffff)|((b)<<16)) +#endif + +#ifdef __cplusplus +extern "C" { +#endif + +#define BASSVERSION 0x204 // API version +#define BASSVERSIONTEXT "2.4" + +#ifndef BASSDEF +#define BASSDEF(f) WINAPI f +#else +#define NOBASSOVERLOADS +#endif + +typedef DWORD HMUSIC; // MOD music handle +typedef DWORD HSAMPLE; // sample handle +typedef DWORD HCHANNEL; // playing sample's channel handle +typedef DWORD HSTREAM; // sample stream handle +typedef DWORD HRECORD; // recording handle +typedef DWORD HSYNC; // synchronizer handle +typedef DWORD HDSP; // DSP handle +typedef DWORD HFX; // DX8 effect handle +typedef DWORD HPLUGIN; // Plugin handle + +// Error codes returned by BASS_ErrorGetCode +#define BASS_OK 0 // all is OK +#define BASS_ERROR_MEM 1 // memory error +#define BASS_ERROR_FILEOPEN 2 // can't open the file +#define BASS_ERROR_DRIVER 3 // can't find a free/valid driver +#define BASS_ERROR_BUFLOST 4 // the sample buffer was lost +#define BASS_ERROR_HANDLE 5 // invalid handle +#define BASS_ERROR_FORMAT 6 // unsupported sample format +#define BASS_ERROR_POSITION 7 // invalid position +#define BASS_ERROR_INIT 8 // BASS_Init has not been successfully called +#define BASS_ERROR_START 9 // BASS_Start has not been successfully called +#define BASS_ERROR_SSL 10 // SSL/HTTPS support isn't available +#define BASS_ERROR_ALREADY 14 // already initialized/paused/whatever +#define BASS_ERROR_NOTAUDIO 17 // file does not contain audio +#define BASS_ERROR_NOCHAN 18 // can't get a free channel +#define BASS_ERROR_ILLTYPE 19 // an illegal type was specified +#define BASS_ERROR_ILLPARAM 20 // an illegal parameter was specified +#define BASS_ERROR_NO3D 21 // no 3D support +#define BASS_ERROR_NOEAX 22 // no EAX support +#define BASS_ERROR_DEVICE 23 // illegal device number +#define BASS_ERROR_NOPLAY 24 // not playing +#define BASS_ERROR_FREQ 25 // illegal sample rate +#define BASS_ERROR_NOTFILE 27 // the stream is not a file stream +#define BASS_ERROR_NOHW 29 // no hardware voices available +#define 
BASS_ERROR_EMPTY 31 // the MOD music has no sequence data +#define BASS_ERROR_NONET 32 // no internet connection could be opened +#define BASS_ERROR_CREATE 33 // couldn't create the file +#define BASS_ERROR_NOFX 34 // effects are not available +#define BASS_ERROR_NOTAVAIL 37 // requested data/action is not available +#define BASS_ERROR_DECODE 38 // the channel is/isn't a "decoding channel" +#define BASS_ERROR_DX 39 // a sufficient DirectX version is not installed +#define BASS_ERROR_TIMEOUT 40 // connection timedout +#define BASS_ERROR_FILEFORM 41 // unsupported file format +#define BASS_ERROR_SPEAKER 42 // unavailable speaker +#define BASS_ERROR_VERSION 43 // invalid BASS version (used by add-ons) +#define BASS_ERROR_CODEC 44 // codec is not available/supported +#define BASS_ERROR_ENDED 45 // the channel/file has ended +#define BASS_ERROR_BUSY 46 // the device is busy +#define BASS_ERROR_UNSTREAMABLE 47 // unstreamable file +#define BASS_ERROR_UNKNOWN -1 // some other mystery problem + +// BASS_SetConfig options +#define BASS_CONFIG_BUFFER 0 +#define BASS_CONFIG_UPDATEPERIOD 1 +#define BASS_CONFIG_GVOL_SAMPLE 4 +#define BASS_CONFIG_GVOL_STREAM 5 +#define BASS_CONFIG_GVOL_MUSIC 6 +#define BASS_CONFIG_CURVE_VOL 7 +#define BASS_CONFIG_CURVE_PAN 8 +#define BASS_CONFIG_FLOATDSP 9 +#define BASS_CONFIG_3DALGORITHM 10 +#define BASS_CONFIG_NET_TIMEOUT 11 +#define BASS_CONFIG_NET_BUFFER 12 +#define BASS_CONFIG_PAUSE_NOPLAY 13 +#define BASS_CONFIG_NET_PREBUF 15 +#define BASS_CONFIG_NET_PASSIVE 18 +#define BASS_CONFIG_REC_BUFFER 19 +#define BASS_CONFIG_NET_PLAYLIST 21 +#define BASS_CONFIG_MUSIC_VIRTUAL 22 +#define BASS_CONFIG_VERIFY 23 +#define BASS_CONFIG_UPDATETHREADS 24 +#define BASS_CONFIG_DEV_BUFFER 27 +#define BASS_CONFIG_REC_LOOPBACK 28 +#define BASS_CONFIG_VISTA_TRUEPOS 30 +#define BASS_CONFIG_IOS_SESSION 34 +#define BASS_CONFIG_IOS_MIXAUDIO 34 +#define BASS_CONFIG_DEV_DEFAULT 36 +#define BASS_CONFIG_NET_READTIMEOUT 37 +#define BASS_CONFIG_VISTA_SPEAKERS 38 +#define BASS_CONFIG_IOS_SPEAKER 39 +#define BASS_CONFIG_MF_DISABLE 40 +#define BASS_CONFIG_HANDLES 41 +#define BASS_CONFIG_UNICODE 42 +#define BASS_CONFIG_SRC 43 +#define BASS_CONFIG_SRC_SAMPLE 44 +#define BASS_CONFIG_ASYNCFILE_BUFFER 45 +#define BASS_CONFIG_OGG_PRESCAN 47 +#define BASS_CONFIG_MF_VIDEO 48 +#define BASS_CONFIG_AIRPLAY 49 +#define BASS_CONFIG_DEV_NONSTOP 50 +#define BASS_CONFIG_IOS_NOCATEGORY 51 +#define BASS_CONFIG_VERIFY_NET 52 +#define BASS_CONFIG_DEV_PERIOD 53 +#define BASS_CONFIG_FLOAT 54 +#define BASS_CONFIG_NET_SEEK 56 +#define BASS_CONFIG_AM_DISABLE 58 +#define BASS_CONFIG_NET_PLAYLIST_DEPTH 59 +#define BASS_CONFIG_NET_PREBUF_WAIT 60 +#define BASS_CONFIG_ANDROID_SESSIONID 62 +#define BASS_CONFIG_WASAPI_PERSIST 65 +#define BASS_CONFIG_REC_WASAPI 66 +#define BASS_CONFIG_ANDROID_AAUDIO 67 + +// BASS_SetConfigPtr options +#define BASS_CONFIG_NET_AGENT 16 +#define BASS_CONFIG_NET_PROXY 17 +#define BASS_CONFIG_IOS_NOTIFY 46 +#define BASS_CONFIG_LIBSSL 64 + +// BASS_CONFIG_IOS_SESSION flags +#define BASS_IOS_SESSION_MIX 1 +#define BASS_IOS_SESSION_DUCK 2 +#define BASS_IOS_SESSION_AMBIENT 4 +#define BASS_IOS_SESSION_SPEAKER 8 +#define BASS_IOS_SESSION_DISABLE 16 + +// BASS_Init flags +#define BASS_DEVICE_8BITS 1 // 8 bit +#define BASS_DEVICE_MONO 2 // mono +#define BASS_DEVICE_3D 4 // enable 3D functionality +#define BASS_DEVICE_16BITS 8 // limit output to 16 bit +#define BASS_DEVICE_LATENCY 0x100 // calculate device latency (BASS_INFO struct) +#define BASS_DEVICE_CPSPEAKERS 0x400 // detect speakers via Windows control 
panel +#define BASS_DEVICE_SPEAKERS 0x800 // force enabling of speaker assignment +#define BASS_DEVICE_NOSPEAKER 0x1000 // ignore speaker arrangement +#define BASS_DEVICE_DMIX 0x2000 // use ALSA "dmix" plugin +#define BASS_DEVICE_FREQ 0x4000 // set device sample rate +#define BASS_DEVICE_STEREO 0x8000 // limit output to stereo +#define BASS_DEVICE_HOG 0x10000 // hog/exclusive mode +#define BASS_DEVICE_AUDIOTRACK 0x20000 // use AudioTrack output +#define BASS_DEVICE_DSOUND 0x40000 // use DirectSound output + +// DirectSound interfaces (for use with BASS_GetDSoundObject) +#define BASS_OBJECT_DS 1 // IDirectSound +#define BASS_OBJECT_DS3DL 2 // IDirectSound3DListener + +// Device info structure +typedef struct { +#if defined(_WIN32_WCE) || (WINAPI_FAMILY && WINAPI_FAMILY!=WINAPI_FAMILY_DESKTOP_APP) + const wchar_t *name; // description + const wchar_t *driver; // driver +#else + const char *name; // description + const char *driver; // driver +#endif + DWORD flags; +} BASS_DEVICEINFO; + +// BASS_DEVICEINFO flags +#define BASS_DEVICE_ENABLED 1 +#define BASS_DEVICE_DEFAULT 2 +#define BASS_DEVICE_INIT 4 +#define BASS_DEVICE_LOOPBACK 8 + +#define BASS_DEVICE_TYPE_MASK 0xff000000 +#define BASS_DEVICE_TYPE_NETWORK 0x01000000 +#define BASS_DEVICE_TYPE_SPEAKERS 0x02000000 +#define BASS_DEVICE_TYPE_LINE 0x03000000 +#define BASS_DEVICE_TYPE_HEADPHONES 0x04000000 +#define BASS_DEVICE_TYPE_MICROPHONE 0x05000000 +#define BASS_DEVICE_TYPE_HEADSET 0x06000000 +#define BASS_DEVICE_TYPE_HANDSET 0x07000000 +#define BASS_DEVICE_TYPE_DIGITAL 0x08000000 +#define BASS_DEVICE_TYPE_SPDIF 0x09000000 +#define BASS_DEVICE_TYPE_HDMI 0x0a000000 +#define BASS_DEVICE_TYPE_DISPLAYPORT 0x40000000 + +// BASS_GetDeviceInfo flags +#define BASS_DEVICES_AIRPLAY 0x1000000 + +typedef struct { + DWORD flags; // device capabilities (DSCAPS_xxx flags) + DWORD hwsize; // size of total device hardware memory + DWORD hwfree; // size of free device hardware memory + DWORD freesam; // number of free sample slots in the hardware + DWORD free3d; // number of free 3D sample slots in the hardware + DWORD minrate; // min sample rate supported by the hardware + DWORD maxrate; // max sample rate supported by the hardware + BOOL eax; // device supports EAX? 
(always FALSE if BASS_DEVICE_3D was not used) + DWORD minbuf; // recommended minimum buffer length in ms (requires BASS_DEVICE_LATENCY) + DWORD dsver; // DirectSound version + DWORD latency; // delay (in ms) before start of playback (requires BASS_DEVICE_LATENCY) + DWORD initflags; // BASS_Init "flags" parameter + DWORD speakers; // number of speakers available + DWORD freq; // current output rate +} BASS_INFO; + +// BASS_INFO flags (from DSOUND.H) +#define DSCAPS_CONTINUOUSRATE 0x00000010 // supports all sample rates between min/maxrate +#define DSCAPS_EMULDRIVER 0x00000020 // device does NOT have hardware DirectSound support +#define DSCAPS_CERTIFIED 0x00000040 // device driver has been certified by Microsoft +#define DSCAPS_SECONDARYMONO 0x00000100 // mono +#define DSCAPS_SECONDARYSTEREO 0x00000200 // stereo +#define DSCAPS_SECONDARY8BIT 0x00000400 // 8 bit +#define DSCAPS_SECONDARY16BIT 0x00000800 // 16 bit + +// Recording device info structure +typedef struct { + DWORD flags; // device capabilities (DSCCAPS_xxx flags) + DWORD formats; // supported standard formats (WAVE_FORMAT_xxx flags) + DWORD inputs; // number of inputs + BOOL singlein; // TRUE = only 1 input can be set at a time + DWORD freq; // current input rate +} BASS_RECORDINFO; + +// BASS_RECORDINFO flags (from DSOUND.H) +#define DSCCAPS_EMULDRIVER DSCAPS_EMULDRIVER // device does NOT have hardware DirectSound recording support +#define DSCCAPS_CERTIFIED DSCAPS_CERTIFIED // device driver has been certified by Microsoft + +// defines for formats field of BASS_RECORDINFO (from MMSYSTEM.H) +#ifndef WAVE_FORMAT_1M08 +#define WAVE_FORMAT_1M08 0x00000001 /* 11.025 kHz, Mono, 8-bit */ +#define WAVE_FORMAT_1S08 0x00000002 /* 11.025 kHz, Stereo, 8-bit */ +#define WAVE_FORMAT_1M16 0x00000004 /* 11.025 kHz, Mono, 16-bit */ +#define WAVE_FORMAT_1S16 0x00000008 /* 11.025 kHz, Stereo, 16-bit */ +#define WAVE_FORMAT_2M08 0x00000010 /* 22.05 kHz, Mono, 8-bit */ +#define WAVE_FORMAT_2S08 0x00000020 /* 22.05 kHz, Stereo, 8-bit */ +#define WAVE_FORMAT_2M16 0x00000040 /* 22.05 kHz, Mono, 16-bit */ +#define WAVE_FORMAT_2S16 0x00000080 /* 22.05 kHz, Stereo, 16-bit */ +#define WAVE_FORMAT_4M08 0x00000100 /* 44.1 kHz, Mono, 8-bit */ +#define WAVE_FORMAT_4S08 0x00000200 /* 44.1 kHz, Stereo, 8-bit */ +#define WAVE_FORMAT_4M16 0x00000400 /* 44.1 kHz, Mono, 16-bit */ +#define WAVE_FORMAT_4S16 0x00000800 /* 44.1 kHz, Stereo, 16-bit */ +#endif + +// Sample info structure +typedef struct { + DWORD freq; // default playback rate + float volume; // default volume (0-1) + float pan; // default pan (-1=left, 0=middle, 1=right) + DWORD flags; // BASS_SAMPLE_xxx flags + DWORD length; // length (in bytes) + DWORD max; // maximum simultaneous playbacks + DWORD origres; // original resolution + DWORD chans; // number of channels + DWORD mingap; // minimum gap (ms) between creating channels + DWORD mode3d; // BASS_3DMODE_xxx mode + float mindist; // minimum distance + float maxdist; // maximum distance + DWORD iangle; // angle of inside projection cone + DWORD oangle; // angle of outside projection cone + float outvol; // delta-volume outside the projection cone + DWORD vam; // voice allocation/management flags (BASS_VAM_xxx) + DWORD priority; // priority (0=lowest, 0xffffffff=highest) +} BASS_SAMPLE; + +#define BASS_SAMPLE_8BITS 1 // 8 bit +#define BASS_SAMPLE_FLOAT 256 // 32 bit floating-point +#define BASS_SAMPLE_MONO 2 // mono +#define BASS_SAMPLE_LOOP 4 // looped +#define BASS_SAMPLE_3D 8 // 3D functionality +#define BASS_SAMPLE_SOFTWARE 16 // not using 
hardware mixing +#define BASS_SAMPLE_MUTEMAX 32 // mute at max distance (3D only) +#define BASS_SAMPLE_VAM 64 // DX7 voice allocation & management +#define BASS_SAMPLE_FX 128 // old implementation of DX8 effects +#define BASS_SAMPLE_OVER_VOL 0x10000 // override lowest volume +#define BASS_SAMPLE_OVER_POS 0x20000 // override longest playing +#define BASS_SAMPLE_OVER_DIST 0x30000 // override furthest from listener (3D only) + +#define BASS_STREAM_PRESCAN 0x20000 // enable pin-point seeking/length (MP3/MP2/MP1) +#define BASS_STREAM_AUTOFREE 0x40000 // automatically free the stream when it stop/ends +#define BASS_STREAM_RESTRATE 0x80000 // restrict the download rate of internet file streams +#define BASS_STREAM_BLOCK 0x100000 // download/play internet file stream in small blocks +#define BASS_STREAM_DECODE 0x200000 // don't play the stream, only decode (BASS_ChannelGetData) +#define BASS_STREAM_STATUS 0x800000 // give server status info (HTTP/ICY tags) in DOWNLOADPROC + +#define BASS_MP3_IGNOREDELAY 0x200 // ignore LAME/Xing/VBRI/iTunes delay & padding info +#define BASS_MP3_SETPOS BASS_STREAM_PRESCAN + +#define BASS_MUSIC_FLOAT BASS_SAMPLE_FLOAT +#define BASS_MUSIC_MONO BASS_SAMPLE_MONO +#define BASS_MUSIC_LOOP BASS_SAMPLE_LOOP +#define BASS_MUSIC_3D BASS_SAMPLE_3D +#define BASS_MUSIC_FX BASS_SAMPLE_FX +#define BASS_MUSIC_AUTOFREE BASS_STREAM_AUTOFREE +#define BASS_MUSIC_DECODE BASS_STREAM_DECODE +#define BASS_MUSIC_PRESCAN BASS_STREAM_PRESCAN // calculate playback length +#define BASS_MUSIC_CALCLEN BASS_MUSIC_PRESCAN +#define BASS_MUSIC_RAMP 0x200 // normal ramping +#define BASS_MUSIC_RAMPS 0x400 // sensitive ramping +#define BASS_MUSIC_SURROUND 0x800 // surround sound +#define BASS_MUSIC_SURROUND2 0x1000 // surround sound (mode 2) +#define BASS_MUSIC_FT2PAN 0x2000 // apply FastTracker 2 panning to XM files +#define BASS_MUSIC_FT2MOD 0x2000 // play .MOD as FastTracker 2 does +#define BASS_MUSIC_PT1MOD 0x4000 // play .MOD as ProTracker 1 does +#define BASS_MUSIC_NONINTER 0x10000 // non-interpolated sample mixing +#define BASS_MUSIC_SINCINTER 0x800000 // sinc interpolated sample mixing +#define BASS_MUSIC_POSRESET 0x8000 // stop all notes when moving position +#define BASS_MUSIC_POSRESETEX 0x400000 // stop all notes and reset bmp/etc when moving position +#define BASS_MUSIC_STOPBACK 0x80000 // stop the music on a backwards jump effect +#define BASS_MUSIC_NOSAMPLE 0x100000 // don't load the samples + +// Speaker assignment flags +#define BASS_SPEAKER_FRONT 0x1000000 // front speakers +#define BASS_SPEAKER_REAR 0x2000000 // rear/side speakers +#define BASS_SPEAKER_CENLFE 0x3000000 // center & LFE speakers (5.1) +#define BASS_SPEAKER_REAR2 0x4000000 // rear center speakers (7.1) +#define BASS_SPEAKER_N(n) ((n)<<24) // n'th pair of speakers (max 15) +#define BASS_SPEAKER_LEFT 0x10000000 // modifier: left +#define BASS_SPEAKER_RIGHT 0x20000000 // modifier: right +#define BASS_SPEAKER_FRONTLEFT BASS_SPEAKER_FRONT|BASS_SPEAKER_LEFT +#define BASS_SPEAKER_FRONTRIGHT BASS_SPEAKER_FRONT|BASS_SPEAKER_RIGHT +#define BASS_SPEAKER_REARLEFT BASS_SPEAKER_REAR|BASS_SPEAKER_LEFT +#define BASS_SPEAKER_REARRIGHT BASS_SPEAKER_REAR|BASS_SPEAKER_RIGHT +#define BASS_SPEAKER_CENTER BASS_SPEAKER_CENLFE|BASS_SPEAKER_LEFT +#define BASS_SPEAKER_LFE BASS_SPEAKER_CENLFE|BASS_SPEAKER_RIGHT +#define BASS_SPEAKER_REAR2LEFT BASS_SPEAKER_REAR2|BASS_SPEAKER_LEFT +#define BASS_SPEAKER_REAR2RIGHT BASS_SPEAKER_REAR2|BASS_SPEAKER_RIGHT + +#define BASS_ASYNCFILE 0x40000000 +#define BASS_UNICODE 0x80000000 + +#define BASS_RECORD_PAUSE 
0x8000 // start recording paused +#define BASS_RECORD_ECHOCANCEL 0x2000 +#define BASS_RECORD_AGC 0x4000 + +// DX7 voice allocation & management flags +#define BASS_VAM_HARDWARE 1 +#define BASS_VAM_SOFTWARE 2 +#define BASS_VAM_TERM_TIME 4 +#define BASS_VAM_TERM_DIST 8 +#define BASS_VAM_TERM_PRIO 16 + +// Channel info structure +typedef struct { + DWORD freq; // default playback rate + DWORD chans; // channels + DWORD flags; // BASS_SAMPLE/STREAM/MUSIC/SPEAKER flags + DWORD ctype; // type of channel + DWORD origres; // original resolution + HPLUGIN plugin; // plugin + HSAMPLE sample; // sample + const char *filename; // filename +} BASS_CHANNELINFO; + +#define BASS_ORIGRES_FLOAT 0x10000 + +// BASS_CHANNELINFO types +#define BASS_CTYPE_SAMPLE 1 +#define BASS_CTYPE_RECORD 2 +#define BASS_CTYPE_STREAM 0x10000 +#define BASS_CTYPE_STREAM_VORBIS 0x10002 +#define BASS_CTYPE_STREAM_OGG 0x10002 +#define BASS_CTYPE_STREAM_MP1 0x10003 +#define BASS_CTYPE_STREAM_MP2 0x10004 +#define BASS_CTYPE_STREAM_MP3 0x10005 +#define BASS_CTYPE_STREAM_AIFF 0x10006 +#define BASS_CTYPE_STREAM_CA 0x10007 +#define BASS_CTYPE_STREAM_MF 0x10008 +#define BASS_CTYPE_STREAM_AM 0x10009 +#define BASS_CTYPE_STREAM_DUMMY 0x18000 +#define BASS_CTYPE_STREAM_DEVICE 0x18001 +#define BASS_CTYPE_STREAM_WAV 0x40000 // WAVE flag, LOWORD=codec +#define BASS_CTYPE_STREAM_WAV_PCM 0x50001 +#define BASS_CTYPE_STREAM_WAV_FLOAT 0x50003 +#define BASS_CTYPE_MUSIC_MOD 0x20000 +#define BASS_CTYPE_MUSIC_MTM 0x20001 +#define BASS_CTYPE_MUSIC_S3M 0x20002 +#define BASS_CTYPE_MUSIC_XM 0x20003 +#define BASS_CTYPE_MUSIC_IT 0x20004 +#define BASS_CTYPE_MUSIC_MO3 0x00100 // MO3 flag + +typedef struct { + DWORD ctype; // channel type +#if defined(_WIN32_WCE) || (WINAPI_FAMILY && WINAPI_FAMILY!=WINAPI_FAMILY_DESKTOP_APP) + const wchar_t *name; // format description + const wchar_t *exts; // file extension filter (*.ext1;*.ext2;etc...) +#else + const char *name; // format description + const char *exts; // file extension filter (*.ext1;*.ext2;etc...) 
+#endif +} BASS_PLUGINFORM; + +typedef struct { + DWORD version; // version (same form as BASS_GetVersion) + DWORD formatc; // number of formats + const BASS_PLUGINFORM *formats; // the array of formats +} BASS_PLUGININFO; + +// 3D vector (for 3D positions/velocities/orientations) +typedef struct BASS_3DVECTOR { +#ifdef __cplusplus + BASS_3DVECTOR() {}; + BASS_3DVECTOR(float _x, float _y, float _z) : x(_x), y(_y), z(_z) {}; +#endif + float x; // +=right, -=left + float y; // +=up, -=down + float z; // +=front, -=behind +} BASS_3DVECTOR; + +// 3D channel modes +#define BASS_3DMODE_NORMAL 0 // normal 3D processing +#define BASS_3DMODE_RELATIVE 1 // position is relative to the listener +#define BASS_3DMODE_OFF 2 // no 3D processing + +// software 3D mixing algorithms (used with BASS_CONFIG_3DALGORITHM) +#define BASS_3DALG_DEFAULT 0 +#define BASS_3DALG_OFF 1 +#define BASS_3DALG_FULL 2 +#define BASS_3DALG_LIGHT 3 + +// EAX environments, use with BASS_SetEAXParameters +enum +{ + EAX_ENVIRONMENT_GENERIC, + EAX_ENVIRONMENT_PADDEDCELL, + EAX_ENVIRONMENT_ROOM, + EAX_ENVIRONMENT_BATHROOM, + EAX_ENVIRONMENT_LIVINGROOM, + EAX_ENVIRONMENT_STONEROOM, + EAX_ENVIRONMENT_AUDITORIUM, + EAX_ENVIRONMENT_CONCERTHALL, + EAX_ENVIRONMENT_CAVE, + EAX_ENVIRONMENT_ARENA, + EAX_ENVIRONMENT_HANGAR, + EAX_ENVIRONMENT_CARPETEDHALLWAY, + EAX_ENVIRONMENT_HALLWAY, + EAX_ENVIRONMENT_STONECORRIDOR, + EAX_ENVIRONMENT_ALLEY, + EAX_ENVIRONMENT_FOREST, + EAX_ENVIRONMENT_CITY, + EAX_ENVIRONMENT_MOUNTAINS, + EAX_ENVIRONMENT_QUARRY, + EAX_ENVIRONMENT_PLAIN, + EAX_ENVIRONMENT_PARKINGLOT, + EAX_ENVIRONMENT_SEWERPIPE, + EAX_ENVIRONMENT_UNDERWATER, + EAX_ENVIRONMENT_DRUGGED, + EAX_ENVIRONMENT_DIZZY, + EAX_ENVIRONMENT_PSYCHOTIC, + + EAX_ENVIRONMENT_COUNT // total number of environments +}; + +// EAX presets, usage: BASS_SetEAXParameters(EAX_PRESET_xxx) +#define EAX_PRESET_GENERIC EAX_ENVIRONMENT_GENERIC,0.5F,1.493F,0.5F +#define EAX_PRESET_PADDEDCELL EAX_ENVIRONMENT_PADDEDCELL,0.25F,0.1F,0.0F +#define EAX_PRESET_ROOM EAX_ENVIRONMENT_ROOM,0.417F,0.4F,0.666F +#define EAX_PRESET_BATHROOM EAX_ENVIRONMENT_BATHROOM,0.653F,1.499F,0.166F +#define EAX_PRESET_LIVINGROOM EAX_ENVIRONMENT_LIVINGROOM,0.208F,0.478F,0.0F +#define EAX_PRESET_STONEROOM EAX_ENVIRONMENT_STONEROOM,0.5F,2.309F,0.888F +#define EAX_PRESET_AUDITORIUM EAX_ENVIRONMENT_AUDITORIUM,0.403F,4.279F,0.5F +#define EAX_PRESET_CONCERTHALL EAX_ENVIRONMENT_CONCERTHALL,0.5F,3.961F,0.5F +#define EAX_PRESET_CAVE EAX_ENVIRONMENT_CAVE,0.5F,2.886F,1.304F +#define EAX_PRESET_ARENA EAX_ENVIRONMENT_ARENA,0.361F,7.284F,0.332F +#define EAX_PRESET_HANGAR EAX_ENVIRONMENT_HANGAR,0.5F,10.0F,0.3F +#define EAX_PRESET_CARPETEDHALLWAY EAX_ENVIRONMENT_CARPETEDHALLWAY,0.153F,0.259F,2.0F +#define EAX_PRESET_HALLWAY EAX_ENVIRONMENT_HALLWAY,0.361F,1.493F,0.0F +#define EAX_PRESET_STONECORRIDOR EAX_ENVIRONMENT_STONECORRIDOR,0.444F,2.697F,0.638F +#define EAX_PRESET_ALLEY EAX_ENVIRONMENT_ALLEY,0.25F,1.752F,0.776F +#define EAX_PRESET_FOREST EAX_ENVIRONMENT_FOREST,0.111F,3.145F,0.472F +#define EAX_PRESET_CITY EAX_ENVIRONMENT_CITY,0.111F,2.767F,0.224F +#define EAX_PRESET_MOUNTAINS EAX_ENVIRONMENT_MOUNTAINS,0.194F,7.841F,0.472F +#define EAX_PRESET_QUARRY EAX_ENVIRONMENT_QUARRY,1.0F,1.499F,0.5F +#define EAX_PRESET_PLAIN EAX_ENVIRONMENT_PLAIN,0.097F,2.767F,0.224F +#define EAX_PRESET_PARKINGLOT EAX_ENVIRONMENT_PARKINGLOT,0.208F,1.652F,1.5F +#define EAX_PRESET_SEWERPIPE EAX_ENVIRONMENT_SEWERPIPE,0.652F,2.886F,0.25F +#define EAX_PRESET_UNDERWATER EAX_ENVIRONMENT_UNDERWATER,1.0F,1.499F,0.0F +#define EAX_PRESET_DRUGGED 
EAX_ENVIRONMENT_DRUGGED,0.875F,8.392F,1.388F +#define EAX_PRESET_DIZZY EAX_ENVIRONMENT_DIZZY,0.139F,17.234F,0.666F +#define EAX_PRESET_PSYCHOTIC EAX_ENVIRONMENT_PSYCHOTIC,0.486F,7.563F,0.806F + +typedef DWORD (CALLBACK STREAMPROC)(HSTREAM handle, void *buffer, DWORD length, void *user); +/* User stream callback function. +handle : The stream that needs writing +buffer : Buffer to write the samples in +length : Number of bytes to write +user : The 'user' parameter value given when calling BASS_StreamCreate +RETURN : Number of bytes written. Set the BASS_STREAMPROC_END flag to end the stream. */ + +#define BASS_STREAMPROC_END 0x80000000 // end of user stream flag + +// special STREAMPROCs +#define STREAMPROC_DUMMY (STREAMPROC*)0 // "dummy" stream +#define STREAMPROC_PUSH (STREAMPROC*)-1 // push stream +#define STREAMPROC_DEVICE (STREAMPROC*)-2 // device mix stream +#define STREAMPROC_DEVICE_3D (STREAMPROC*)-3 // device 3D mix stream + +// BASS_StreamCreateFileUser file systems +#define STREAMFILE_NOBUFFER 0 +#define STREAMFILE_BUFFER 1 +#define STREAMFILE_BUFFERPUSH 2 + +// User file stream callback functions +typedef void (CALLBACK FILECLOSEPROC)(void *user); +typedef QWORD (CALLBACK FILELENPROC)(void *user); +typedef DWORD (CALLBACK FILEREADPROC)(void *buffer, DWORD length, void *user); +typedef BOOL (CALLBACK FILESEEKPROC)(QWORD offset, void *user); + +typedef struct { + FILECLOSEPROC *close; + FILELENPROC *length; + FILEREADPROC *read; + FILESEEKPROC *seek; +} BASS_FILEPROCS; + +// BASS_StreamPutFileData options +#define BASS_FILEDATA_END 0 // end & close the file + +// BASS_StreamGetFilePosition modes +#define BASS_FILEPOS_CURRENT 0 +#define BASS_FILEPOS_DECODE BASS_FILEPOS_CURRENT +#define BASS_FILEPOS_DOWNLOAD 1 +#define BASS_FILEPOS_END 2 +#define BASS_FILEPOS_START 3 +#define BASS_FILEPOS_CONNECTED 4 +#define BASS_FILEPOS_BUFFER 5 +#define BASS_FILEPOS_SOCKET 6 +#define BASS_FILEPOS_ASYNCBUF 7 +#define BASS_FILEPOS_SIZE 8 +#define BASS_FILEPOS_BUFFERING 9 + +typedef void (CALLBACK DOWNLOADPROC)(const void *buffer, DWORD length, void *user); +/* Internet stream download callback function. +buffer : Buffer containing the downloaded data... NULL=end of download +length : Number of bytes in the buffer +user : The 'user' parameter value given when calling BASS_StreamCreateURL */ + +// BASS_ChannelSetSync types +#define BASS_SYNC_POS 0 +#define BASS_SYNC_END 2 +#define BASS_SYNC_META 4 +#define BASS_SYNC_SLIDE 5 +#define BASS_SYNC_STALL 6 +#define BASS_SYNC_DOWNLOAD 7 +#define BASS_SYNC_FREE 8 +#define BASS_SYNC_SETPOS 11 +#define BASS_SYNC_MUSICPOS 10 +#define BASS_SYNC_MUSICINST 1 +#define BASS_SYNC_MUSICFX 3 +#define BASS_SYNC_OGG_CHANGE 12 +#define BASS_SYNC_DEV_FAIL 14 +#define BASS_SYNC_DEV_FORMAT 15 +#define BASS_SYNC_THREAD 0x20000000 // flag: call sync in other thread +#define BASS_SYNC_MIXTIME 0x40000000 // flag: sync at mixtime, else at playtime +#define BASS_SYNC_ONETIME 0x80000000 // flag: sync only once, else continuously + +typedef void (CALLBACK SYNCPROC)(HSYNC handle, DWORD channel, DWORD data, void *user); +/* Sync callback function. +handle : The sync that has occured +channel: Channel that the sync occured in +data : Additional data associated with the sync's occurance +user : The 'user' parameter given when calling BASS_ChannelSetSync */ + +typedef void (CALLBACK DSPPROC)(HDSP handle, DWORD channel, void *buffer, DWORD length, void *user); +/* DSP callback function. 
+handle : The DSP handle +channel: Channel that the DSP is being applied to +buffer : Buffer to apply the DSP to +length : Number of bytes in the buffer +user : The 'user' parameter given when calling BASS_ChannelSetDSP */ + +typedef BOOL (CALLBACK RECORDPROC)(HRECORD handle, const void *buffer, DWORD length, void *user); +/* Recording callback function. +handle : The recording handle +buffer : Buffer containing the recorded sample data +length : Number of bytes +user : The 'user' parameter value given when calling BASS_RecordStart +RETURN : TRUE = continue recording, FALSE = stop */ + +// BASS_ChannelIsActive return values +#define BASS_ACTIVE_STOPPED 0 +#define BASS_ACTIVE_PLAYING 1 +#define BASS_ACTIVE_STALLED 2 +#define BASS_ACTIVE_PAUSED 3 +#define BASS_ACTIVE_PAUSED_DEVICE 4 + +// Channel attributes +#define BASS_ATTRIB_FREQ 1 +#define BASS_ATTRIB_VOL 2 +#define BASS_ATTRIB_PAN 3 +#define BASS_ATTRIB_EAXMIX 4 +#define BASS_ATTRIB_NOBUFFER 5 +#define BASS_ATTRIB_VBR 6 +#define BASS_ATTRIB_CPU 7 +#define BASS_ATTRIB_SRC 8 +#define BASS_ATTRIB_NET_RESUME 9 +#define BASS_ATTRIB_SCANINFO 10 +#define BASS_ATTRIB_NORAMP 11 +#define BASS_ATTRIB_BITRATE 12 +#define BASS_ATTRIB_BUFFER 13 +#define BASS_ATTRIB_GRANULE 14 +#define BASS_ATTRIB_MUSIC_AMPLIFY 0x100 +#define BASS_ATTRIB_MUSIC_PANSEP 0x101 +#define BASS_ATTRIB_MUSIC_PSCALER 0x102 +#define BASS_ATTRIB_MUSIC_BPM 0x103 +#define BASS_ATTRIB_MUSIC_SPEED 0x104 +#define BASS_ATTRIB_MUSIC_VOL_GLOBAL 0x105 +#define BASS_ATTRIB_MUSIC_ACTIVE 0x106 +#define BASS_ATTRIB_MUSIC_VOL_CHAN 0x200 // + channel # +#define BASS_ATTRIB_MUSIC_VOL_INST 0x300 // + instrument # + +// BASS_ChannelSlideAttribute flags +#define BASS_SLIDE_LOG 0x1000000 + +// BASS_ChannelGetData flags +#define BASS_DATA_AVAILABLE 0 // query how much data is buffered +#define BASS_DATA_FIXED 0x20000000 // flag: return 8.24 fixed-point data +#define BASS_DATA_FLOAT 0x40000000 // flag: return floating-point sample data +#define BASS_DATA_FFT256 0x80000000 // 256 sample FFT +#define BASS_DATA_FFT512 0x80000001 // 512 FFT +#define BASS_DATA_FFT1024 0x80000002 // 1024 FFT +#define BASS_DATA_FFT2048 0x80000003 // 2048 FFT +#define BASS_DATA_FFT4096 0x80000004 // 4096 FFT +#define BASS_DATA_FFT8192 0x80000005 // 8192 FFT +#define BASS_DATA_FFT16384 0x80000006 // 16384 FFT +#define BASS_DATA_FFT32768 0x80000007 // 32768 FFT +#define BASS_DATA_FFT_INDIVIDUAL 0x10 // FFT flag: FFT for each channel, else all combined +#define BASS_DATA_FFT_NOWINDOW 0x20 // FFT flag: no Hanning window +#define BASS_DATA_FFT_REMOVEDC 0x40 // FFT flag: pre-remove DC bias +#define BASS_DATA_FFT_COMPLEX 0x80 // FFT flag: return complex data +#define BASS_DATA_FFT_NYQUIST 0x100 // FFT flag: return extra Nyquist value + +// BASS_ChannelGetLevelEx flags +#define BASS_LEVEL_MONO 1 +#define BASS_LEVEL_STEREO 2 +#define BASS_LEVEL_RMS 4 +#define BASS_LEVEL_VOLPAN 8 + +// BASS_ChannelGetTags types : what's returned +#define BASS_TAG_ID3 0 // ID3v1 tags : TAG_ID3 structure +#define BASS_TAG_ID3V2 1 // ID3v2 tags : variable length block +#define BASS_TAG_OGG 2 // OGG comments : series of null-terminated UTF-8 strings +#define BASS_TAG_HTTP 3 // HTTP headers : series of null-terminated ANSI strings +#define BASS_TAG_ICY 4 // ICY headers : series of null-terminated ANSI strings +#define BASS_TAG_META 5 // ICY metadata : ANSI string +#define BASS_TAG_APE 6 // APE tags : series of null-terminated UTF-8 strings +#define BASS_TAG_MP4 7 // MP4/iTunes metadata : series of null-terminated UTF-8 strings +#define BASS_TAG_WMA 8 // WMA 
tags : series of null-terminated UTF-8 strings +#define BASS_TAG_VENDOR 9 // OGG encoder : UTF-8 string +#define BASS_TAG_LYRICS3 10 // Lyric3v2 tag : ASCII string +#define BASS_TAG_CA_CODEC 11 // CoreAudio codec info : TAG_CA_CODEC structure +#define BASS_TAG_MF 13 // Media Foundation tags : series of null-terminated UTF-8 strings +#define BASS_TAG_WAVEFORMAT 14 // WAVE format : WAVEFORMATEEX structure +#define BASS_TAG_AM_MIME 15 // Android Media MIME type : ASCII string +#define BASS_TAG_AM_NAME 16 // Android Media codec name : ASCII string +#define BASS_TAG_RIFF_INFO 0x100 // RIFF "INFO" tags : series of null-terminated ANSI strings +#define BASS_TAG_RIFF_BEXT 0x101 // RIFF/BWF "bext" tags : TAG_BEXT structure +#define BASS_TAG_RIFF_CART 0x102 // RIFF/BWF "cart" tags : TAG_CART structure +#define BASS_TAG_RIFF_DISP 0x103 // RIFF "DISP" text tag : ANSI string +#define BASS_TAG_RIFF_CUE 0x104 // RIFF "cue " chunk : TAG_CUE structure +#define BASS_TAG_RIFF_SMPL 0x105 // RIFF "smpl" chunk : TAG_SMPL structure +#define BASS_TAG_APE_BINARY 0x1000 // + index #, binary APE tag : TAG_APE_BINARY structure +#define BASS_TAG_MUSIC_NAME 0x10000 // MOD music name : ANSI string +#define BASS_TAG_MUSIC_MESSAGE 0x10001 // MOD message : ANSI string +#define BASS_TAG_MUSIC_ORDERS 0x10002 // MOD order list : BYTE array of pattern numbers +#define BASS_TAG_MUSIC_AUTH 0x10003 // MOD author : UTF-8 string +#define BASS_TAG_MUSIC_INST 0x10100 // + instrument #, MOD instrument name : ANSI string +#define BASS_TAG_MUSIC_SAMPLE 0x10300 // + sample #, MOD sample name : ANSI string + +// ID3v1 tag structure +typedef struct { + char id[3]; + char title[30]; + char artist[30]; + char album[30]; + char year[4]; + char comment[30]; + BYTE genre; +} TAG_ID3; + +// Binary APE tag structure +typedef struct { + const char *key; + const void *data; + DWORD length; +} TAG_APE_BINARY; + +// BWF "bext" tag structure +#ifdef _MSC_VER +#pragma warning(push) +#pragma warning(disable:4200) +#endif +#pragma pack(push,1) +typedef struct { + char Description[256]; // description + char Originator[32]; // name of the originator + char OriginatorReference[32]; // reference of the originator + char OriginationDate[10]; // date of creation (yyyy-mm-dd) + char OriginationTime[8]; // time of creation (hh-mm-ss) + QWORD TimeReference; // first sample count since midnight (little-endian) + WORD Version; // BWF version (little-endian) + BYTE UMID[64]; // SMPTE UMID + BYTE Reserved[190]; +#if defined(__GNUC__) && __GNUC__<3 + char CodingHistory[0]; // history +#elif 1 // change to 0 if compiler fails the following line + char CodingHistory[]; // history +#else + char CodingHistory[1]; // history +#endif +} TAG_BEXT; +#pragma pack(pop) + +// BWF "cart" tag structures +typedef struct +{ + DWORD dwUsage; // FOURCC timer usage ID + DWORD dwValue; // timer value in samples from head +} TAG_CART_TIMER; + +typedef struct +{ + char Version[4]; // version of the data structure + char Title[64]; // title of cart audio sequence + char Artist[64]; // artist or creator name + char CutID[64]; // cut number identification + char ClientID[64]; // client identification + char Category[64]; // category ID, PSA, NEWS, etc + char Classification[64]; // classification or auxiliary key + char OutCue[64]; // out cue text + char StartDate[10]; // yyyy-mm-dd + char StartTime[8]; // hh:mm:ss + char EndDate[10]; // yyyy-mm-dd + char EndTime[8]; // hh:mm:ss + char ProducerAppID[64]; // name of vendor or application + char ProducerAppVersion[64]; // version of producer 
application + char UserDef[64]; // user defined text + DWORD dwLevelReference; // sample value for 0 dB reference + TAG_CART_TIMER PostTimer[8]; // 8 time markers after head + char Reserved[276]; + char URL[1024]; // uniform resource locator +#if defined(__GNUC__) && __GNUC__<3 + char TagText[0]; // free form text for scripts or tags +#elif 1 // change to 0 if compiler fails the following line + char TagText[]; // free form text for scripts or tags +#else + char TagText[1]; // free form text for scripts or tags +#endif +} TAG_CART; + +// RIFF "cue " tag structures +typedef struct +{ + DWORD dwName; + DWORD dwPosition; + DWORD fccChunk; + DWORD dwChunkStart; + DWORD dwBlockStart; + DWORD dwSampleOffset; +} TAG_CUE_POINT; + +typedef struct +{ + DWORD dwCuePoints; +#if defined(__GNUC__) && __GNUC__<3 + TAG_CUE_POINT CuePoints[0]; +#elif 1 // change to 0 if compiler fails the following line + TAG_CUE_POINT CuePoints[]; +#else + TAG_CUE_POINT CuePoints[1]; +#endif +} TAG_CUE; + +// RIFF "smpl" tag structures +typedef struct +{ + DWORD dwIdentifier; + DWORD dwType; + DWORD dwStart; + DWORD dwEnd; + DWORD dwFraction; + DWORD dwPlayCount; +} TAG_SMPL_LOOP; + +typedef struct +{ + DWORD dwManufacturer; + DWORD dwProduct; + DWORD dwSamplePeriod; + DWORD dwMIDIUnityNote; + DWORD dwMIDIPitchFraction; + DWORD dwSMPTEFormat; + DWORD dwSMPTEOffset; + DWORD cSampleLoops; + DWORD cbSamplerData; +#if defined(__GNUC__) && __GNUC__<3 + TAG_SMPL_LOOP SampleLoops[0]; +#elif 1 // change to 0 if compiler fails the following line + TAG_SMPL_LOOP SampleLoops[]; +#else + TAG_SMPL_LOOP SampleLoops[1]; +#endif +} TAG_SMPL; +#ifdef _MSC_VER +#pragma warning(pop) +#endif + +// CoreAudio codec info structure +typedef struct { + DWORD ftype; // file format + DWORD atype; // audio format + const char *name; // description +} TAG_CA_CODEC; + +#ifndef _WAVEFORMATEX_ +#define _WAVEFORMATEX_ +#pragma pack(push,1) +typedef struct tWAVEFORMATEX +{ + WORD wFormatTag; + WORD nChannels; + DWORD nSamplesPerSec; + DWORD nAvgBytesPerSec; + WORD nBlockAlign; + WORD wBitsPerSample; + WORD cbSize; +} WAVEFORMATEX, *PWAVEFORMATEX, *LPWAVEFORMATEX; +typedef const WAVEFORMATEX *LPCWAVEFORMATEX; +#pragma pack(pop) +#endif + +// BASS_ChannelGetLength/GetPosition/SetPosition modes +#define BASS_POS_BYTE 0 // byte position +#define BASS_POS_MUSIC_ORDER 1 // order.row position, MAKELONG(order,row) +#define BASS_POS_OGG 3 // OGG bitstream number +#define BASS_POS_RESET 0x2000000 // flag: reset user file buffers +#define BASS_POS_RELATIVE 0x4000000 // flag: seek relative to the current position +#define BASS_POS_INEXACT 0x8000000 // flag: allow seeking to inexact position +#define BASS_POS_DECODE 0x10000000 // flag: get the decoding (not playing) position +#define BASS_POS_DECODETO 0x20000000 // flag: decode to the position instead of seeking +#define BASS_POS_SCAN 0x40000000 // flag: scan to the position + +// BASS_ChannelSetDevice/GetDevice option +#define BASS_NODEVICE 0x20000 + +// BASS_RecordSetInput flags +#define BASS_INPUT_OFF 0x10000 +#define BASS_INPUT_ON 0x20000 + +#define BASS_INPUT_TYPE_MASK 0xff000000 +#define BASS_INPUT_TYPE_UNDEF 0x00000000 +#define BASS_INPUT_TYPE_DIGITAL 0x01000000 +#define BASS_INPUT_TYPE_LINE 0x02000000 +#define BASS_INPUT_TYPE_MIC 0x03000000 +#define BASS_INPUT_TYPE_SYNTH 0x04000000 +#define BASS_INPUT_TYPE_CD 0x05000000 +#define BASS_INPUT_TYPE_PHONE 0x06000000 +#define BASS_INPUT_TYPE_SPEAKER 0x07000000 +#define BASS_INPUT_TYPE_WAVE 0x08000000 +#define BASS_INPUT_TYPE_AUX 0x09000000 +#define 
BASS_INPUT_TYPE_ANALOG 0x0a000000 + +// BASS_ChannelSetFX effect types +#define BASS_FX_DX8_CHORUS 0 +#define BASS_FX_DX8_COMPRESSOR 1 +#define BASS_FX_DX8_DISTORTION 2 +#define BASS_FX_DX8_ECHO 3 +#define BASS_FX_DX8_FLANGER 4 +#define BASS_FX_DX8_GARGLE 5 +#define BASS_FX_DX8_I3DL2REVERB 6 +#define BASS_FX_DX8_PARAMEQ 7 +#define BASS_FX_DX8_REVERB 8 +#define BASS_FX_VOLUME 9 + +typedef struct { + float fWetDryMix; + float fDepth; + float fFeedback; + float fFrequency; + DWORD lWaveform; // 0=triangle, 1=sine + float fDelay; + DWORD lPhase; // BASS_DX8_PHASE_xxx +} BASS_DX8_CHORUS; + +typedef struct { + float fGain; + float fAttack; + float fRelease; + float fThreshold; + float fRatio; + float fPredelay; +} BASS_DX8_COMPRESSOR; + +typedef struct { + float fGain; + float fEdge; + float fPostEQCenterFrequency; + float fPostEQBandwidth; + float fPreLowpassCutoff; +} BASS_DX8_DISTORTION; + +typedef struct { + float fWetDryMix; + float fFeedback; + float fLeftDelay; + float fRightDelay; + BOOL lPanDelay; +} BASS_DX8_ECHO; + +typedef struct { + float fWetDryMix; + float fDepth; + float fFeedback; + float fFrequency; + DWORD lWaveform; // 0=triangle, 1=sine + float fDelay; + DWORD lPhase; // BASS_DX8_PHASE_xxx +} BASS_DX8_FLANGER; + +typedef struct { + DWORD dwRateHz; // Rate of modulation in hz + DWORD dwWaveShape; // 0=triangle, 1=square +} BASS_DX8_GARGLE; + +typedef struct { + int lRoom; // [-10000, 0] default: -1000 mB + int lRoomHF; // [-10000, 0] default: 0 mB + float flRoomRolloffFactor; // [0.0, 10.0] default: 0.0 + float flDecayTime; // [0.1, 20.0] default: 1.49s + float flDecayHFRatio; // [0.1, 2.0] default: 0.83 + int lReflections; // [-10000, 1000] default: -2602 mB + float flReflectionsDelay; // [0.0, 0.3] default: 0.007 s + int lReverb; // [-10000, 2000] default: 200 mB + float flReverbDelay; // [0.0, 0.1] default: 0.011 s + float flDiffusion; // [0.0, 100.0] default: 100.0 % + float flDensity; // [0.0, 100.0] default: 100.0 % + float flHFReference; // [20.0, 20000.0] default: 5000.0 Hz +} BASS_DX8_I3DL2REVERB; + +typedef struct { + float fCenter; + float fBandwidth; + float fGain; +} BASS_DX8_PARAMEQ; + +typedef struct { + float fInGain; // [-96.0,0.0] default: 0.0 dB + float fReverbMix; // [-96.0,0.0] default: 0.0 db + float fReverbTime; // [0.001,3000.0] default: 1000.0 ms + float fHighFreqRTRatio; // [0.001,0.999] default: 0.001 +} BASS_DX8_REVERB; + +#define BASS_DX8_PHASE_NEG_180 0 +#define BASS_DX8_PHASE_NEG_90 1 +#define BASS_DX8_PHASE_ZERO 2 +#define BASS_DX8_PHASE_90 3 +#define BASS_DX8_PHASE_180 4 + +typedef struct { + float fTarget; + float fCurrent; + float fTime; + DWORD lCurve; +} BASS_FX_VOLUME_PARAM; + +typedef void (CALLBACK IOSNOTIFYPROC)(DWORD status); +/* iOS notification callback function. 
+status : The notification (BASS_IOSNOTIFY_xxx) */ + +#define BASS_IOSNOTIFY_INTERRUPT 1 // interruption started +#define BASS_IOSNOTIFY_INTERRUPT_END 2 // interruption ended + +BOOL BASSDEF(BASS_SetConfig)(DWORD option, DWORD value); +DWORD BASSDEF(BASS_GetConfig)(DWORD option); +BOOL BASSDEF(BASS_SetConfigPtr)(DWORD option, const void *value); +void *BASSDEF(BASS_GetConfigPtr)(DWORD option); +DWORD BASSDEF(BASS_GetVersion)(); +int BASSDEF(BASS_ErrorGetCode)(); +BOOL BASSDEF(BASS_GetDeviceInfo)(DWORD device, BASS_DEVICEINFO *info); +#if defined(_WIN32) && !defined(_WIN32_WCE) && !(WINAPI_FAMILY && WINAPI_FAMILY!=WINAPI_FAMILY_DESKTOP_APP) +BOOL BASSDEF(BASS_Init)(int device, DWORD freq, DWORD flags, HWND win, const GUID *dsguid); +#else +BOOL BASSDEF(BASS_Init)(int device, DWORD freq, DWORD flags, void *win, void *dsguid); +#endif +BOOL BASSDEF(BASS_SetDevice)(DWORD device); +DWORD BASSDEF(BASS_GetDevice)(); +BOOL BASSDEF(BASS_Free)(); +#if defined(_WIN32) && !defined(_WIN32_WCE) && !(WINAPI_FAMILY && WINAPI_FAMILY!=WINAPI_FAMILY_DESKTOP_APP) +void *BASSDEF(BASS_GetDSoundObject)(DWORD object); +#endif +BOOL BASSDEF(BASS_GetInfo)(BASS_INFO *info); +BOOL BASSDEF(BASS_Update)(DWORD length); +float BASSDEF(BASS_GetCPU)(); +BOOL BASSDEF(BASS_Start)(); +BOOL BASSDEF(BASS_Stop)(); +BOOL BASSDEF(BASS_Pause)(); +BOOL BASSDEF(BASS_IsStarted)(); +BOOL BASSDEF(BASS_SetVolume)(float volume); +float BASSDEF(BASS_GetVolume)(); + +HPLUGIN BASSDEF(BASS_PluginLoad)(const char *file, DWORD flags); +BOOL BASSDEF(BASS_PluginFree)(HPLUGIN handle); +const BASS_PLUGININFO *BASSDEF(BASS_PluginGetInfo)(HPLUGIN handle); + +BOOL BASSDEF(BASS_Set3DFactors)(float distf, float rollf, float doppf); +BOOL BASSDEF(BASS_Get3DFactors)(float *distf, float *rollf, float *doppf); +BOOL BASSDEF(BASS_Set3DPosition)(const BASS_3DVECTOR *pos, const BASS_3DVECTOR *vel, const BASS_3DVECTOR *front, const BASS_3DVECTOR *top); +BOOL BASSDEF(BASS_Get3DPosition)(BASS_3DVECTOR *pos, BASS_3DVECTOR *vel, BASS_3DVECTOR *front, BASS_3DVECTOR *top); +void BASSDEF(BASS_Apply3D)(); +#if defined(_WIN32) && !defined(_WIN32_WCE) && !(WINAPI_FAMILY && WINAPI_FAMILY!=WINAPI_FAMILY_DESKTOP_APP) +BOOL BASSDEF(BASS_SetEAXParameters)(int env, float vol, float decay, float damp); +BOOL BASSDEF(BASS_GetEAXParameters)(DWORD *env, float *vol, float *decay, float *damp); +#endif + +HMUSIC BASSDEF(BASS_MusicLoad)(BOOL mem, const void *file, QWORD offset, DWORD length, DWORD flags, DWORD freq); +BOOL BASSDEF(BASS_MusicFree)(HMUSIC handle); + +HSAMPLE BASSDEF(BASS_SampleLoad)(BOOL mem, const void *file, QWORD offset, DWORD length, DWORD max, DWORD flags); +HSAMPLE BASSDEF(BASS_SampleCreate)(DWORD length, DWORD freq, DWORD chans, DWORD max, DWORD flags); +BOOL BASSDEF(BASS_SampleFree)(HSAMPLE handle); +BOOL BASSDEF(BASS_SampleSetData)(HSAMPLE handle, const void *buffer); +BOOL BASSDEF(BASS_SampleGetData)(HSAMPLE handle, void *buffer); +BOOL BASSDEF(BASS_SampleGetInfo)(HSAMPLE handle, BASS_SAMPLE *info); +BOOL BASSDEF(BASS_SampleSetInfo)(HSAMPLE handle, const BASS_SAMPLE *info); +HCHANNEL BASSDEF(BASS_SampleGetChannel)(HSAMPLE handle, BOOL onlynew); +DWORD BASSDEF(BASS_SampleGetChannels)(HSAMPLE handle, HCHANNEL *channels); +BOOL BASSDEF(BASS_SampleStop)(HSAMPLE handle); + +HSTREAM BASSDEF(BASS_StreamCreate)(DWORD freq, DWORD chans, DWORD flags, STREAMPROC *proc, void *user); +HSTREAM BASSDEF(BASS_StreamCreateFile)(BOOL mem, const void *file, QWORD offset, QWORD length, DWORD flags); +HSTREAM BASSDEF(BASS_StreamCreateURL)(const char *url, DWORD offset, DWORD 
flags, DOWNLOADPROC *proc, void *user); +HSTREAM BASSDEF(BASS_StreamCreateFileUser)(DWORD system, DWORD flags, const BASS_FILEPROCS *proc, void *user); +BOOL BASSDEF(BASS_StreamFree)(HSTREAM handle); +QWORD BASSDEF(BASS_StreamGetFilePosition)(HSTREAM handle, DWORD mode); +DWORD BASSDEF(BASS_StreamPutData)(HSTREAM handle, const void *buffer, DWORD length); +DWORD BASSDEF(BASS_StreamPutFileData)(HSTREAM handle, const void *buffer, DWORD length); + +BOOL BASSDEF(BASS_RecordGetDeviceInfo)(DWORD device, BASS_DEVICEINFO *info); +BOOL BASSDEF(BASS_RecordInit)(int device); +BOOL BASSDEF(BASS_RecordSetDevice)(DWORD device); +DWORD BASSDEF(BASS_RecordGetDevice)(); +BOOL BASSDEF(BASS_RecordFree)(); +BOOL BASSDEF(BASS_RecordGetInfo)(BASS_RECORDINFO *info); +const char *BASSDEF(BASS_RecordGetInputName)(int input); +BOOL BASSDEF(BASS_RecordSetInput)(int input, DWORD flags, float volume); +DWORD BASSDEF(BASS_RecordGetInput)(int input, float *volume); +HRECORD BASSDEF(BASS_RecordStart)(DWORD freq, DWORD chans, DWORD flags, RECORDPROC *proc, void *user); + +double BASSDEF(BASS_ChannelBytes2Seconds)(DWORD handle, QWORD pos); +QWORD BASSDEF(BASS_ChannelSeconds2Bytes)(DWORD handle, double pos); +DWORD BASSDEF(BASS_ChannelGetDevice)(DWORD handle); +BOOL BASSDEF(BASS_ChannelSetDevice)(DWORD handle, DWORD device); +DWORD BASSDEF(BASS_ChannelIsActive)(DWORD handle); +BOOL BASSDEF(BASS_ChannelGetInfo)(DWORD handle, BASS_CHANNELINFO *info); +const char *BASSDEF(BASS_ChannelGetTags)(DWORD handle, DWORD tags); +DWORD BASSDEF(BASS_ChannelFlags)(DWORD handle, DWORD flags, DWORD mask); +BOOL BASSDEF(BASS_ChannelUpdate)(DWORD handle, DWORD length); +BOOL BASSDEF(BASS_ChannelLock)(DWORD handle, BOOL lock); +BOOL BASSDEF(BASS_ChannelPlay)(DWORD handle, BOOL restart); +BOOL BASSDEF(BASS_ChannelStop)(DWORD handle); +BOOL BASSDEF(BASS_ChannelPause)(DWORD handle); +BOOL BASSDEF(BASS_ChannelSetAttribute)(DWORD handle, DWORD attrib, float value); +BOOL BASSDEF(BASS_ChannelGetAttribute)(DWORD handle, DWORD attrib, float *value); +BOOL BASSDEF(BASS_ChannelSlideAttribute)(DWORD handle, DWORD attrib, float value, DWORD time); +BOOL BASSDEF(BASS_ChannelIsSliding)(DWORD handle, DWORD attrib); +BOOL BASSDEF(BASS_ChannelSetAttributeEx)(DWORD handle, DWORD attrib, void *value, DWORD size); +DWORD BASSDEF(BASS_ChannelGetAttributeEx)(DWORD handle, DWORD attrib, void *value, DWORD size); +BOOL BASSDEF(BASS_ChannelSet3DAttributes)(DWORD handle, int mode, float min, float max, int iangle, int oangle, float outvol); +BOOL BASSDEF(BASS_ChannelGet3DAttributes)(DWORD handle, DWORD *mode, float *min, float *max, DWORD *iangle, DWORD *oangle, float *outvol); +BOOL BASSDEF(BASS_ChannelSet3DPosition)(DWORD handle, const BASS_3DVECTOR *pos, const BASS_3DVECTOR *orient, const BASS_3DVECTOR *vel); +BOOL BASSDEF(BASS_ChannelGet3DPosition)(DWORD handle, BASS_3DVECTOR *pos, BASS_3DVECTOR *orient, BASS_3DVECTOR *vel); +QWORD BASSDEF(BASS_ChannelGetLength)(DWORD handle, DWORD mode); +BOOL BASSDEF(BASS_ChannelSetPosition)(DWORD handle, QWORD pos, DWORD mode); +QWORD BASSDEF(BASS_ChannelGetPosition)(DWORD handle, DWORD mode); +DWORD BASSDEF(BASS_ChannelGetLevel)(DWORD handle); +BOOL BASSDEF(BASS_ChannelGetLevelEx)(DWORD handle, float *levels, float length, DWORD flags); +DWORD BASSDEF(BASS_ChannelGetData)(DWORD handle, void *buffer, DWORD length); +HSYNC BASSDEF(BASS_ChannelSetSync)(DWORD handle, DWORD type, QWORD param, SYNCPROC *proc, void *user); +BOOL BASSDEF(BASS_ChannelRemoveSync)(DWORD handle, HSYNC sync); +HDSP BASSDEF(BASS_ChannelSetDSP)(DWORD handle, 
DSPPROC *proc, void *user, int priority); +BOOL BASSDEF(BASS_ChannelRemoveDSP)(DWORD handle, HDSP dsp); +BOOL BASSDEF(BASS_ChannelSetLink)(DWORD handle, DWORD chan); +BOOL BASSDEF(BASS_ChannelRemoveLink)(DWORD handle, DWORD chan); +HFX BASSDEF(BASS_ChannelSetFX)(DWORD handle, DWORD type, int priority); +BOOL BASSDEF(BASS_ChannelRemoveFX)(DWORD handle, HFX fx); + +BOOL BASSDEF(BASS_FXSetParameters)(HFX handle, const void *params); +BOOL BASSDEF(BASS_FXGetParameters)(HFX handle, void *params); +BOOL BASSDEF(BASS_FXReset)(HFX handle); +BOOL BASSDEF(BASS_FXSetPriority)(HFX handle, int priority); + +#ifdef __cplusplus +} + +#if defined(_WIN32) && !defined(NOBASSOVERLOADS) +static inline HPLUGIN BASS_PluginLoad(const WCHAR *file, DWORD flags) +{ + return BASS_PluginLoad((const char*)file, flags|BASS_UNICODE); +} + +static inline HMUSIC BASS_MusicLoad(BOOL mem, const WCHAR *file, QWORD offset, DWORD length, DWORD flags, DWORD freq) +{ + return BASS_MusicLoad(mem, (const void*)file, offset, length, flags|BASS_UNICODE, freq); +} + +static inline HSAMPLE BASS_SampleLoad(BOOL mem, const WCHAR *file, QWORD offset, DWORD length, DWORD max, DWORD flags) +{ + return BASS_SampleLoad(mem, (const void*)file, offset, length, max, flags|BASS_UNICODE); +} + +static inline HSTREAM BASS_StreamCreateFile(BOOL mem, const WCHAR *file, QWORD offset, QWORD length, DWORD flags) +{ + return BASS_StreamCreateFile(mem, (const void*)file, offset, length, flags|BASS_UNICODE); +} + +static inline HSTREAM BASS_StreamCreateURL(const WCHAR *url, DWORD offset, DWORD flags, DOWNLOADPROC *proc, void *user) +{ + return BASS_StreamCreateURL((const char*)url, offset, flags|BASS_UNICODE, proc, user); +} + +static inline BOOL BASS_SetConfigPtr(DWORD option, const WCHAR *value) +{ + return BASS_SetConfigPtr(option|BASS_UNICODE, (const void*)value); +} +#endif +#endif + +#endif diff --git a/hsmodem/bass.lib b/hsmodem/bass.lib new file mode 100755 index 0000000..0f04905 Binary files /dev/null and b/hsmodem/bass.lib differ diff --git a/hsmodem/constellation.cpp b/hsmodem/constellation.cpp new file mode 100755 index 0000000..c97b6b1 --- /dev/null +++ b/hsmodem/constellation.cpp @@ -0,0 +1,207 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +* +*/ + +#include "hsmodem.h" + +// functions for non-differential QPSK +// depending on the phase shift rotate a data blocks constellation + +//uint8_t headerbytes[HEADERLEN] = {0x53, 0xe1, 0xa6}; +// corresponds to these QPSK symbols: +// bits: 01010011 11100001 10100110 +// syms: 1 1 0 3 3 2 0 1 2 2 1 2 + +uint8_t rxbytebuf[UDPBLOCKLEN+100]; // +100 ... 
reserve, just to be sure + +uint8_t *convertQPSKSymToBytes(uint8_t *rxsymbols) +{ + int sidx = 0; + for(int i=0; i> 6) & 3; + syms[symidx++] = (bytes[i] >> 4) & 3; + syms[symidx++] = (bytes[i] >> 2) & 3; + syms[symidx++] = (bytes[i] >> 0) & 3; + } +} + +void rotateQPSKsyms(uint8_t *src, uint8_t *dst, int len) +{ + for(int i=0; i> 5) & 7; + syms[symidx++] = (bytes[0+i] >> 2) & 7; + syms[symidx++] = ((bytes[0+i] & 3) << 1) | ((bytes[1+i] >> 7) & 1); + syms[symidx++] = (bytes[1+i] >> 4) & 7; + syms[symidx++] = (bytes[1+i] >> 1) & 7; + syms[symidx++] = ((bytes[1+i] & 1) << 2) | ((bytes[2+i] >> 6) & 3); + syms[symidx++] = (bytes[2+i] >> 3) & 7; + syms[symidx++] = bytes[2+i] & 7; + } +} + +void rotate8PSKsyms(uint8_t *src, uint8_t *dst, int len) +{ + for(int i=0; i> 1; + rxbytebuf[i+1] = rxsymbols[sidx++] << 7; + rxbytebuf[i+1] |= rxsymbols[sidx++] << 4; + rxbytebuf[i+1] |= rxsymbols[sidx++] << 1; + rxbytebuf[i+1] |= rxsymbols[sidx] >> 2; + rxbytebuf[i+2] = rxsymbols[sidx++] << 6; + rxbytebuf[i+2] |= rxsymbols[sidx++] << 3; + rxbytebuf[i+2] |= rxsymbols[sidx++]; + } + return rxbytebuf; +} + +void shiftleft(uint8_t *data, int shiftnum, int len) +{ + for(int j=0; j=0; i--) + { + b1 = (data[i] & 0x80)>>7; + data[i] <<= 1; + data[i] |= b2; + b2 = b1; + } + } +} diff --git a/hsmodem/crc16.cpp b/hsmodem/crc16.cpp new file mode 100755 index 0000000..d0b9c47 --- /dev/null +++ b/hsmodem/crc16.cpp @@ -0,0 +1,83 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
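A minimal usage sketch for the CRC helpers that follow in crc16.cpp: each call passes a register index so that the RX path, the TX path and the file-ID generator each own a static shift register and stay thread safe. The wrapper function name, the index values and the payload below are illustrative only; the real index assignments are made elsewhere in hsmodem.

```c
#include <stdint.h>
#include <string.h>

/* prototypes as declared in crc16.cpp below */
uint16_t Crc16_messagecalc(int rxtx, uint8_t *data, int len);
uint32_t crc32_messagecalc(int rxtx, unsigned char *data, int len);

void crc_usage_sketch(void)
{
    uint8_t payload[16];
    memset(payload, 0x55, sizeof(payload));   /* dummy frame content */

    /* CRC-16 over the frame, register 0 (e.g. the TX direction - assumption) */
    uint16_t c16 = Crc16_messagecalc(0, payload, (int)sizeof(payload));

    /* CRC-32 over the same frame, register 1 (e.g. the RX direction - assumption) */
    uint32_t c32 = crc32_messagecalc(1, payload, (int)sizeof(payload));

    (void)c16;
    (void)c32;
}
```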
+* +*/ + +#include "hsmodem.h" + +// since we use a static crc register we need TWO separated registers +// for RX and TX to get it thread safe, no.2 is for file ID generation + +uint16_t reg16[3] = {0xffff,0xffff}; // shift register + +uint16_t Crc16_bytecalc(int rxtx, uint8_t byt) +{ + uint16_t polynom = 0x8408; // generator polynom + + for (int i = 0; i < 8; ++i) + { + if ((reg16[rxtx] & 1) != (byt & 1)) + reg16[rxtx] = (uint16_t)((reg16[rxtx] >> 1) ^ polynom); + else + reg16[rxtx] >>= 1; + byt >>= 1; + } + return reg16[rxtx]; +} + +uint16_t Crc16_messagecalc(int rxtx, uint8_t *data,int len) +{ + reg16[rxtx] = 0xffff; + for (int i = 0; i < len; i++) + reg16[rxtx] = Crc16_bytecalc(rxtx,data[i]); + return reg16[rxtx]; +} + +// ================================================================= + +uint32_t reg32[2] = {0xffffffff,0xffffffff}; // Shiftregister + +void crc32_bytecalc(int rxtx, unsigned char byte) +{ +int i; +uint32_t polynom = 0xEDB88320; // Generatorpolynom + + for (i=0; i<8; ++i) + { + if ((reg32[rxtx]&1) != (byte&1)) + reg32[rxtx] = (reg32[rxtx]>>1)^polynom; + else + reg32[rxtx] >>= 1; + byte >>= 1; + } +} + +uint32_t crc32_messagecalc(int rxtx, unsigned char *data, int len) +{ +int i; + + reg32[rxtx] = 0xffffffff; + for(i=0; i +#include +#include + +#include "fec/schifra_galois_field.hpp" +#include "fec/schifra_galois_field_polynomial.hpp" +#include "fec/schifra_sequential_root_generator_polynomial_creator.hpp" +#include "fec/schifra_reed_solomon_encoder.hpp" +#include "fec/schifra_reed_solomon_decoder.hpp" +#include "fec/schifra_reed_solomon_block.hpp" +#include "fec/schifra_error_processes.hpp" + +/* Finite Field Parameters */ +const std::size_t field_descriptor = 8; +const std::size_t generator_polynomial_index = 120; +const std::size_t generator_polynomial_root_count = FECLEN; + +/* Reed Solomon Code Parameters */ +const std::size_t code_length = FECBLOCKLEN; +const std::size_t fec_length = FECLEN; +const std::size_t data_length = code_length - fec_length; + +/* Instantiate Finite Field and Generator Polynomials */ +const schifra::galois::field field(field_descriptor, + schifra::galois::primitive_polynomial_size06, + schifra::galois::primitive_polynomial06); + +schifra::galois::field_polynomial generator_polynomial(field); + +/* Instantiate Encoder and Decoder (Codec) */ +typedef schifra::reed_solomon::encoder encoder_t; +typedef schifra::reed_solomon::decoder decoder_t; + + + + +int cfec_Reconstruct(uint8_t *darr, uint8_t *destination) +{ +schifra::reed_solomon::block rxblock; + + for(std::size_t i=0; i block; + + // fill payload into an FEC-block + for(std::size_t i=0; i + +typedef unsigned char gf; + +typedef struct { + unsigned long magic; + unsigned short k, n; /* parameters of the code */ + gf* enc_matrix; +} fec_t; + +#if defined(_MSC_VER) +// actually, some of the flavors (i.e. Enterprise) do support restrict +//#define restrict __restrict +#define restrict +#endif + +/** + * param k the number of blocks required to reconstruct + * param m the total number of blocks created + */ +fec_t* fec_new(unsigned short k, unsigned short m); +void fec_free(fec_t* p); + +/** + * @param inpkts the "primary blocks" i.e. 
the chunks of the input data + * @param fecs buffers into which the secondary blocks will be written + * @param block_nums the numbers of the desired check blocks (the id >= k) which fec_encode() will produce and store into the buffers of the fecs parameter + * @param num_block_nums the length of the block_nums array + * @param sz size of a packet in bytes + */ +void fec_encode(const fec_t* code, const gf** src, gf** fecs, size_t sz); + +/** + * @param inpkts an array of packets (size k); If a primary block, i, is present then it must be at index i. Secondary blocks can appear anywhere. + * @param outpkts an array of buffers into which the reconstructed output packets will be written (only packets which are not present in the inpkts input will be reconstructed and written to outpkts) + * @param index an array of the blocknums of the packets in inpkts + * @param sz size of a packet in bytes + */ +void fec_decode(const fec_t* code, const gf** inpkts, gf** outpkts, const unsigned* index, size_t sz); + +#if defined(_MSC_VER) +#define alloca _alloca +#else +#ifdef __GNUC__ +#ifndef alloca +#define alloca(x) __builtin_alloca(x) +#endif +#else +#include +#endif +#endif + +/** + * zfec -- fast forward error correction library with Python interface + * + * Copyright (C) 2007-2008 Allmydata, Inc. + * Author: Zooko Wilcox-O'Hearn + * + * This file is part of zfec. + * + * See README.rst for licensing information. + */ + +/* + * Much of this work is derived from the "fec" software by Luigi Rizzo, et + * al., the copyright notice and licence terms of which are included below + * for reference. + * + * fec.h -- forward error correction based on Vandermonde matrices + * 980614 + * (C) 1997-98 Luigi Rizzo (luigi@iet.unipi.it) + * + * Portions derived from code by Phil Karn (karn@ka9q.ampr.org), + * Robert Morelos-Zaragoza (robert@spectra.eng.hawaii.edu) and Hari + * Thirumoorthy (harit@spectra.eng.hawaii.edu), Aug 1995 + * + * Modifications by Dan Rubenstein (see Modifications.txt for + * their description. + * Modifications (C) 1998 Dan Rubenstein (drubenst@cs.umass.edu) + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above + * copyright notice, this list of conditions and the following + * disclaimer in the documentation and/or other materials + * provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A + * PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS + * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, + * OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, + * OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR + * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT + * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY + * OF SUCH DAMAGE. 
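The zfec interface above produces m total blocks from k primary blocks and can rebuild any missing primaries from k surviving blocks. A minimal sketch against the simplified signatures declared in this fec.h; the wrapper function name and the values of k, m and the block size are illustrative, not the parameters hsmodem actually uses.

```c
#include <string.h>

enum { K = 4, M = 6, SZ = 64 };         /* illustrative parameters only */

void zfec_usage_sketch(void)
{
    fec_t *code = fec_new(K, M);        /* k blocks needed, m blocks created */

    static gf data[K][SZ];              /* the K primary (payload) blocks   */
    static gf check[M - K][SZ];         /* buffers for the secondary blocks */

    const gf *src[K];
    gf *fecs[M - K];

    for (int i = 0; i < K; i++)     src[i]  = data[i];
    for (int i = 0; i < M - K; i++) fecs[i] = check[i];

    /* derive the M-K check blocks from the K primary blocks */
    fec_encode(code, src, fecs, SZ);

    /* after transmission: gather any K surviving blocks (primaries at their
       own index, secondaries anywhere), record their block numbers in an
       index array and pass everything to fec_decode() to rebuild the rest */

    fec_free(code);
}
```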
+ */ + diff --git a/hsmodem/fec/schifra_crc.hpp b/hsmodem/fec/schifra_crc.hpp new file mode 100644 index 0000000..62b1073 --- /dev/null +++ b/hsmodem/fec/schifra_crc.hpp @@ -0,0 +1,172 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_CRC_HPP +#define INCLUDE_SCHIFRA_CRC_HPP + + +#include +#include + + +namespace schifra +{ + + class crc32 + { + public: + + typedef std::size_t crc32_t; + + crc32(const crc32_t& _key, const crc32_t& _state = 0x00) + : key(_key), + state(_state), + initial_state(_state) + { + initialize_crc32_table(); + } + + void reset() + { + state = initial_state; + } + + void update_1byte(const unsigned char data) + { + state = (state >> 8) ^ table[data]; + } + + void update(const unsigned char data[], const std::size_t& count) + { + for (std::size_t i = 0; i < count; ++i) + { + update_1byte(data[i]); + } + } + + void update(char data[], const std::size_t& count) + { + for (std::size_t i = 0; i < count; ++i) + { + update_1byte(static_cast(data[i])); + } + } + + void update(const std::string& data) + { + for (std::size_t i = 0; i < data.size(); ++i) + { + update_1byte(static_cast(data[i])); + } + } + + void update(const std::size_t& data) + { + update_1byte(static_cast((data ) & 0xFF)); + update_1byte(static_cast((data >> 8) & 0xFF)); + update_1byte(static_cast((data >> 16) & 0xFF)); + update_1byte(static_cast((data >> 24) & 0xFF)); + } + + crc32_t crc() + { + return state; + } + + private: + + crc32& operator=(const crc32&); + + void initialize_crc32_table() + { + for (std::size_t i = 0; i < 0xFF; ++i) + { + crc32_t reg = i; + + for (int j = 0; j < 0x08; ++j) + { + reg = ((reg & 1) ? 
(reg >> 1) ^ key : reg >> 1); + } + + table[i] = reg; + } + } + + protected: + + crc32_t key; + crc32_t state; + const crc32_t initial_state; + crc32_t table[256]; + }; + + class schifra_crc : public crc32 + { + public: + + schifra_crc(const crc32_t _key) + : crc32(_key,0xAAAAAAAA) + {} + + void update(const unsigned char& data) + { + state = ((state >> 8) ^ table[data]) ^ ((state << 8) ^ table[~data]); + } + + void update(const unsigned char data[], const std::size_t& count) + { + for (std::size_t i = 0; i < count; ++i) + { + update_1byte(data[i]); + } + } + + void update(const char data[], const std::size_t& count) + { + for (std::size_t i = 0; i < count; ++i) + { + update_1byte(static_cast(data[i])); + } + } + + void update(const std::string& data) + { + for (std::size_t i = 0; i < data.size(); ++i) + { + update_1byte(static_cast(data[i])); + } + } + + void update(const std::size_t& data) + { + update_1byte(static_cast((data ) & 0xFF)); + update_1byte(static_cast((data >> 8) & 0xFF)); + update_1byte(static_cast((data >> 16) & 0xFF)); + update_1byte(static_cast((data >> 24) & 0xFF)); + } + + }; + +} // namespace schifra + + +#endif diff --git a/hsmodem/fec/schifra_ecc_traits.hpp b/hsmodem/fec/schifra_ecc_traits.hpp new file mode 100644 index 0000000..879d056 --- /dev/null +++ b/hsmodem/fec/schifra_ecc_traits.hpp @@ -0,0 +1,109 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_ECC_TRAITS_HPP +#define INCLUDE_SCHIFRA_ECC_TRAITS_HPP + + +namespace schifra +{ + namespace traits + { + + template struct symbol; + /* bits per symbol */ + template <> struct symbol< 3> { enum {size = 2}; }; + template <> struct symbol< 7> { enum {size = 3}; }; + template <> struct symbol< 15> { enum {size = 4}; }; + template <> struct symbol< 31> { enum {size = 5}; }; + template <> struct symbol< 63> { enum {size = 6}; }; + template <> struct symbol< 127> { enum {size = 7}; }; + template <> struct symbol< 255> { enum {size = 8}; }; + template <> struct symbol< 511> { enum {size = 9}; }; + template <> struct symbol< 1023> { enum {size = 10}; }; + template <> struct symbol< 2047> { enum {size = 11}; }; + template <> struct symbol< 4195> { enum {size = 12}; }; + template <> struct symbol< 8191> { enum {size = 13}; }; + template <> struct symbol<16383> { enum {size = 14}; }; + template <> struct symbol<32768> { enum {size = 15}; }; + template <> struct symbol<65535> { enum {size = 16}; }; + + /* Credits: Modern C++ Design - Andrei Alexandrescu */ + template class __static_assert__ + { + public: + + __static_assert__(...) 
{} + }; + + template <> class __static_assert__ {}; + template <> class __static_assert__; + + template + struct validate_reed_solomon_code_parameters + { + private: + + __static_assert__<(code_length > 0)> assertion1; + __static_assert__<(code_length > fec_length)> assertion2; + __static_assert__<(code_length > data_length)> assertion3; + __static_assert__<(code_length == fec_length + data_length)> assertion4; + }; + + template + struct validate_reed_solomon_block_parameters + { + private: + + __static_assert__<(code_length > 0)> assertion1; + __static_assert__<(code_length > fec_length)> assertion2; + __static_assert__<(code_length > data_length)> assertion3; + __static_assert__<(code_length == fec_length + data_length)> assertion4; + }; + + template + struct equivalent_encoder_decoder + { + private: + + __static_assert__<(Encoder::trait::code_length == Decoder::trait::code_length)> assertion1; + __static_assert__<(Encoder::trait::fec_length == Decoder::trait::fec_length) > assertion2; + __static_assert__<(Encoder::trait::data_length == Decoder::trait::data_length)> assertion3; + }; + + template + class reed_solomon_triat + { + public: + + typedef validate_reed_solomon_code_parameters vrscp; + + enum { code_length = code_length_ }; + enum { fec_length = fec_length_ }; + enum { data_length = data_length_ }; + }; + + } + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_erasure_channel.hpp b/hsmodem/fec/schifra_erasure_channel.hpp new file mode 100644 index 0000000..194107a --- /dev/null +++ b/hsmodem/fec/schifra_erasure_channel.hpp @@ -0,0 +1,256 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_ERASURE_CHANNEL_HPP +#define INCLUDE_SCHIFRA_ERASURE_CHANNEL_HPP + + +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_reed_solomon_interleaving.hpp" +#include "schifra_utilities.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + inline void interleaved_stack_erasure_mapper(const std::vector& missing_row_index, + std::vector& erasure_row_list) + { + erasure_row_list.resize(block_length); + + for (std::size_t i = 0; i < block_length; ++i) + { + erasure_row_list[i].reserve(fec_length); + } + + for (std::size_t i = 0; i < missing_row_index.size(); ++i) + { + for (std::size_t j = 0; j < block_length; ++j) + { + erasure_row_list[j].push_back(missing_row_index[i]); + } + } + } + + template + inline bool erasure_channel_stack_encode(const encoder& encoder, + block (&output)[code_length]) + { + for (std::size_t i = 0; i < code_length; ++i) + { + if (!encoder.encode(output[i])) + { + std::cout << "erasure_channel_stack_encode() - Error: Failed to encode block[" << i <<"]" << std::endl; + + return false; + } + } + + interleave(output); + + return true; + } + + template + class erasure_code_decoder : public decoder + { + public: + + typedef decoder decoder_type; + typedef typename decoder_type::block_type block_type; + typedef std::vector polynomial_list_type; + + erasure_code_decoder(const galois::field& gfield, + const unsigned int& gen_initial_index) + : decoder(gfield, gen_initial_index) + { + for (std::size_t i = 0; i < code_length; ++i) + { + received_.push_back(galois::field_polynomial(decoder_type::field_, code_length - 1)); + syndrome_.push_back(galois::field_polynomial(decoder_type::field_)); + } + }; + + bool decode(block_type rsblock[code_length], const erasure_locations_t& erasure_list) const + { + if ( + (!decoder_type::decoder_valid_) || + (erasure_list.size() != fec_length) + ) + { + return false; + } + + for (std::size_t i = 0; i < code_length; ++i) + { + decoder_type::load_message (received_[i], rsblock [i]); + decoder_type::compute_syndrome(received_[i], syndrome_[i]); + } + + erasure_locations_t erasure_locations; + decoder_type::prepare_erasure_list(erasure_locations,erasure_list); + + galois::field_polynomial gamma(galois::field_element(decoder_type::field_, 1)); + + decoder_type::compute_gamma(gamma,erasure_locations); + + std::vector gamma_roots; + + find_roots_in_data(gamma,gamma_roots); + + polynomial_list_type omega; + + for (std::size_t i = 0; i < code_length; ++i) + { + omega.push_back((gamma * syndrome_[i]) % fec_length); + } + + galois::field_polynomial gamma_derivative = gamma.derivative(); + + for (std::size_t i = 0; i < gamma_roots.size(); ++i) + { + int error_location = static_cast(gamma_roots[i]); + galois::field_symbol alpha_inverse = decoder_type::field_.alpha(error_location); + galois::field_element denominator = gamma_derivative(alpha_inverse); + + if (denominator == 0) + { + return false; + } + + for (std::size_t j = 0; j < code_length; ++j) + { + galois::field_element numerator = (omega[j](alpha_inverse) * decoder_type::root_exponent_table_[error_location]); + /* + A minor optimization can be made in the event the + numerator is equal to zero by not executing the + following line. 
+ */ + rsblock[j][error_location - 1] ^= decoder_type::field_.div(numerator.poly(),denominator.poly()); + } + } + + return true; + } + + private: + + void find_roots_in_data(const galois::field_polynomial& poly, std::vector& root_list) const + { + /* + Chien Search, as described in parent, but only + for locations within the data range of the message. + */ + root_list.reserve(fec_length << 1); + root_list.resize(0); + + std::size_t polynomial_degree = poly.deg(); + std::size_t root_list_size = 0; + + for (int i = 1; i <= static_cast(data_length); ++i) + { + if (0 == poly(decoder_type::field_.alpha(i)).poly()) + { + root_list.push_back(i); + root_list_size++; + + if (root_list_size == polynomial_degree) + { + break; + } + } + } + } + + mutable polynomial_list_type received_; + mutable polynomial_list_type syndrome_; + + }; + + template + inline bool erasure_channel_stack_decode(const decoder& general_decoder, + const erasure_locations_t& missing_row_index, + block (&output)[code_length]) + { + if (missing_row_index.empty()) + { + return true; + } + + interleave(output); + + for (std::size_t i = 0; i < code_length; ++i) + { + if (!general_decoder.decode(output[i],missing_row_index)) + { + std::cout << "[2] erasure_channel_stack_decode() - Error: Failed to decode block[" << i <<"]" << std::endl; + + return false; + } + } + + return true; + } + + template + inline bool erasure_channel_stack_decode(const erasure_code_decoder& erasure_decoder, + const erasure_locations_t& missing_row_index, + block (&output)[code_length]) + { + /* + Note: 1. Missing row indicies must be unique. + 2. Missing row indicies must exist within + the stack's size. + 3. There will be NO errors in the rows (aka output) + 4. The information members of the blocks will + not be utilized. + There are NO exceptions to these rules! + */ + if (missing_row_index.empty()) + { + return true; + } + else if (missing_row_index.size() == fec_length) + { + interleave(output); + + return erasure_decoder.decode(output,missing_row_index); + } + else + return erasure_channel_stack_decode( + static_cast&>(erasure_decoder), + missing_row_index, + output); + } + + } // namespace reed_solomon + +} // namepsace schifra + + +#endif diff --git a/hsmodem/fec/schifra_error_processes.hpp b/hsmodem/fec/schifra_error_processes.hpp new file mode 100644 index 0000000..d2f61fe --- /dev/null +++ b/hsmodem/fec/schifra_error_processes.hpp @@ -0,0 +1,602 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_ERROR_PROCESSES_HPP +#define INCLUDE_SCHIFRA_ERROR_PROCESSES_HPP + + +#include +#include +#include +#include +#include +#include + +#include "schifra_reed_solomon_block.hpp" +#include "schifra_fileio.hpp" + + +namespace schifra +{ + + template + inline void add_erasure_error(const std::size_t& position, reed_solomon::block& block) + { + block[position] = (~block[position]) & 0xFF; // Or one can simply equate to zero + } + + template + inline void add_error(const std::size_t& position, reed_solomon::block& block) + { + block[position] = (~block[position]) & 0xFF; + } + + template + inline void add_error_4bit_symbol(const std::size_t& position, reed_solomon::block& block) + { + block[position] = (~block[position]) & 0x0F; + } + + template + inline void corrupt_message_all_errors00(reed_solomon::block& rsblock, + const std::size_t& start_position, + const std::size_t& scale = 1) + { + for (std::size_t i = 0; i < (fec_length >> 1); ++i) + { + add_error((start_position + scale * i) % code_length,rsblock); + } + } + + template + inline void corrupt_message_all_errors_wth_mask(reed_solomon::block& rsblock, + const std::size_t& start_position, + const int& mask, + const std::size_t& scale = 1) + { + for (std::size_t i = 0; i < (fec_length >> 1); ++i) + { + std::size_t position = (start_position + scale * i) % code_length; + rsblock[position] = (~rsblock[position]) & mask; + + } + } + + template + inline void corrupt_message_all_errors(schifra::reed_solomon::block& rsblock, + const std::size_t error_count, + const std::size_t& start_position, + const std::size_t& scale = 1) + { + for (std::size_t i = 0; i < error_count; ++i) + { + add_error((start_position + scale * i) % code_length,rsblock); + } + } + + template + inline void corrupt_message_all_erasures00(reed_solomon::block& rsblock, + reed_solomon::erasure_locations_t& erasure_list, + const std::size_t& start_position, + const std::size_t& scale = 1) + { + std::size_t erasures[code_length]; + + for (std::size_t i = 0; i < code_length; ++i) erasures[i] = 0; + + for (std::size_t i = 0; i < fec_length; ++i) + { + std::size_t error_position = (start_position + scale * i) % code_length; + add_erasure_error(error_position,rsblock); + erasures[error_position] = 1; + } + + for (std::size_t i = 0; i < code_length; ++i) + { + if (erasures[i] == 1) erasure_list.push_back(i); + } + } + + template + inline void corrupt_message_all_erasures(reed_solomon::block& rsblock, + reed_solomon::erasure_locations_t& erasure_list, + const std::size_t erasure_count, + const std::size_t& start_position, + const std::size_t& scale = 1) + { + std::size_t erasures[code_length]; + + for (std::size_t i = 0; i < code_length; ++i) erasures[i] = 0; + + for (std::size_t i = 0; i < erasure_count; ++i) + { + /* Note: Must make sure duplicate erasures are not added */ + std::size_t error_position = (start_position + scale * i) % code_length; + add_erasure_error(error_position,rsblock); + erasures[error_position] = 1; + } + + for (std::size_t i = 0; i < code_length; ++i) + { + if (erasures[i] == 1) erasure_list.push_back(i); + } + } + + namespace error_mode + { + enum type + { + errors_erasures, // Errors first then erasures + erasures_errors // Erasures first then errors + }; + } + + template + inline void corrupt_message_errors_erasures(reed_solomon::block& rsblock, + const error_mode::type& mode, + const 
std::size_t& start_position, + const std::size_t& erasure_count, + reed_solomon::erasure_locations_t& erasure_list, + const std::size_t between_space = 0) + { + std::size_t error_count = (fec_length - erasure_count) >> 1; + + if ((2 * error_count) + erasure_count > fec_length) + { + std::cout << "corrupt_message_errors_erasures() - ERROR Too many erasures and errors!" << std::endl; + std::cout << "Error Count: " << error_count << std::endl; + std::cout << "Erasure Count: " << error_count << std::endl; + + return; + } + + std::size_t erasures[code_length]; + + for (std::size_t i = 0; i < code_length; ++i) erasures[i] = 0; + + std::size_t error_position = 0; + + switch (mode) + { + case error_mode::erasures_errors : { + for (std::size_t i = 0; i < erasure_count; ++i) + { + error_position = (start_position + i) % code_length; + add_erasure_error(error_position,rsblock); + erasures[error_position] = 1; + } + + for (std::size_t i = 0; i < error_count; ++i) + { + error_position = (start_position + erasure_count + between_space + i) % code_length; + add_error(error_position,rsblock); + } + } + break; + + case error_mode::errors_erasures : { + for (std::size_t i = 0; i < error_count; ++i) + { + error_position = (start_position + i) % code_length; + add_error(error_position,rsblock); + } + + for (std::size_t i = 0; i < erasure_count; ++i) + { + error_position = (start_position + error_count + between_space + i) % code_length; + add_erasure_error(error_position,rsblock); + erasures[error_position] = 1; + } + } + break; + } + + for (std::size_t i = 0; i < code_length; ++i) + { + if (erasures[i] == 1) erasure_list.push_back(i); + } + + } + + template + inline void corrupt_message_interleaved_errors_erasures(reed_solomon::block& rsblock, + const std::size_t& start_position, + const std::size_t& erasure_count, + reed_solomon::erasure_locations_t& erasure_list) + { + std::size_t error_count = (fec_length - erasure_count) >> 1; + + if ((2 * error_count) + erasure_count > fec_length) + { + std::cout << "corrupt_message_interleaved_errors_erasures() - [1] ERROR Too many erasures and errors!" << std::endl; + std::cout << "Error Count: " << error_count << std::endl; + std::cout << "Erasure Count: " << error_count << std::endl; + + return; + } + + std::size_t erasures[code_length]; + + for (std::size_t i = 0; i < code_length; ++i) erasures[i] = 0; + + std::size_t e = 0; + std::size_t s = 0; + std::size_t i = 0; + + while ((e < error_count) || (s < erasure_count) || (i < (error_count + erasure_count))) + { + std::size_t error_position = (start_position + i) % code_length; + + if (((i & 0x01) == 0) && (s < erasure_count)) + { + add_erasure_error(error_position,rsblock); + erasures[error_position] = 1; + s++; + } + else if (((i & 0x01) == 1) && (e < error_count)) + { + e++; + add_error(error_position,rsblock); + } + ++i; + } + + for (std::size_t j = 0; j < code_length; ++j) + { + if (erasures[j] == 1) erasure_list.push_back(j); + } + + if ((2 * e) + erasure_list.size() > fec_length) + { + std::cout << "corrupt_message_interleaved_errors_erasures() - [2] ERROR Too many erasures and errors!" 
<< std::endl; + std::cout << "Error Count: " << error_count << std::endl; + std::cout << "Erasure Count: " << error_count << std::endl; + + return; + } + } + + namespace details + { + template + struct corrupt_message_all_errors_segmented_impl + { + static void process(reed_solomon::block& rsblock, + const std::size_t& start_position, + const std::size_t& distance_between_blocks = 1) + { + std::size_t block_1_error_count = (fec_length >> 2); + std::size_t block_2_error_count = (fec_length >> 1) - block_1_error_count; + + for (std::size_t i = 0; i < block_1_error_count; ++i) + { + add_error((start_position + i) % code_length,rsblock); + } + + std::size_t new_start_position = (start_position + (block_1_error_count)) + distance_between_blocks; + + for (std::size_t i = 0; i < block_2_error_count; ++i) + { + add_error((new_start_position + i) % code_length,rsblock); + } + } + }; + + template + struct corrupt_message_all_errors_segmented_impl + { + static void process(reed_solomon::block&, + const std::size_t&, const std::size_t&) + {} + }; + } + + template + inline void corrupt_message_all_errors_segmented(reed_solomon::block& rsblock, + const std::size_t& start_position, + const std::size_t& distance_between_blocks = 1) + { + details::corrupt_message_all_errors_segmented_impl 2)>:: + process(rsblock,start_position,distance_between_blocks); + } + + inline bool check_for_duplicate_erasures(const std::vector& erasure_list) + { + for (std::size_t i = 0; i < erasure_list.size(); ++i) + { + for (std::size_t j = i + 1; j < erasure_list.size(); ++j) + { + if (erasure_list[i] == erasure_list[j]) + { + return false; + } + } + } + + return true; + } + + inline void dump_erasure_list(const schifra::reed_solomon::erasure_locations_t& erasure_list) + { + for (std::size_t i = 0; i < erasure_list.size(); ++i) + { + std::cout << "[" << i << "," << erasure_list[i] << "] "; + } + + std::cout << std::endl; + } + + template + inline bool is_block_equivelent(const reed_solomon::block& rsblock, + const std::string& data, + const bool display = false, + const bool all_errors = false) + { + std::string::const_iterator it = data.begin(); + + bool error_found = false; + + for (std::size_t i = 0; i < code_length - fec_length; ++i, ++it) + { + if (static_cast(rsblock.data[i] & 0xFF) != (*it)) + { + error_found = true; + + if (display) + { + printf("is_block_equivelent() - Error at loc : %02d\td1: %02X\td2: %02X\n", + static_cast(i), + rsblock.data[i], + static_cast(*it)); + } + + if (!all_errors) + return false; + } + } + + return !error_found; + } + + template + inline bool are_blocks_equivelent(const reed_solomon::block& block1, + const reed_solomon::block& block2, + const std::size_t span = code_length, + const bool display = false, + const bool all_errors = false) + { + bool error_found = false; + + for (std::size_t i = 0; i < span; ++i) + { + if (block1[i] != block2[i]) + { + error_found = true; + + if (display) + { + printf("are_blocks_equivelent() - Error at loc : %02d\td1: %04X\td2: %04X\n", + static_cast(i), + block1[i], + block2[i]); + } + + if (!all_errors) + return false; + } + } + + return !error_found; + } + + template + inline bool block_stacks_equivelent(const reed_solomon::block block_stack1[stack_size], + const reed_solomon::block block_stack2[stack_size]) + { + for (std::size_t i = 0; i < stack_size; ++i) + { + if (!are_blocks_equivelent(block_stack1[i],block_stack2[i])) + { + return false; + } + } + + return true; + } + + template + inline bool block_stacks_equivelent(const reed_solomon::data_block 
block_stack1[stack_size], + const reed_solomon::data_block block_stack2[stack_size]) + { + for (std::size_t i = 0; i < stack_size; ++i) + { + for (std::size_t j = 0; j < block_length; ++j) + { + if (block_stack1[i][j] != block_stack2[i][j]) + { + return false; + } + } + } + + return true; + } + + inline void corrupt_file_with_burst_errors(const std::string& file_name, + const long& start_position, + const long& burst_length) + { + if (!schifra::fileio::file_exists(file_name)) + { + std::cout << "corrupt_file() - Error: " << file_name << " does not exist!" << std::endl; + return; + } + + if (static_cast(start_position + burst_length) >= schifra::fileio::file_size(file_name)) + { + std::cout << "corrupt_file() - Error: Burst error out of bounds." << std::endl; + return; + } + + std::vector data(burst_length); + + std::ifstream ifile(file_name.c_str(), std::ios::in | std::ios::binary); + + if (!ifile) + { + return; + } + + ifile.seekg(start_position,std::ios_base::beg); + ifile.read(&data[0],burst_length); + ifile.close(); + + for (long i = 0; i < burst_length; ++i) + { + data[i] = ~data[i]; + } + + std::ofstream ofile(file_name.c_str(), std::ios::in | std::ios::out | std::ios::binary); + + if (!ofile) + { + return; + } + + ofile.seekp(start_position,std::ios_base::beg); + ofile.write(&data[0],burst_length); + ofile.close(); + } + + static const std::size_t global_random_error_index[] = + { + 13, 170, 148, 66, 228, 208, 182, 92, + 4, 137, 97, 99, 237, 151, 15, 0, + 119, 243, 41, 222, 33, 211, 188, 5, + 44, 30, 210, 111, 54, 79, 61, 223, + 239, 149, 73, 115, 201, 234, 194, 62, + 147, 70, 19, 49, 72, 52, 164, 29, + 102, 225, 203, 153, 18, 205, 40, 217, + 165, 177, 166, 134, 236, 68, 231, 154, + 116, 136, 47, 240, 46, 89, 120, 183, + 242, 28, 161, 226, 241, 230, 10, 131, + 207, 132, 83, 171, 202, 195, 227, 206, + 112, 88, 90, 146, 117, 180, 26, 78, + 118, 254, 107, 110, 220, 7, 192, 187, + 31, 175, 127, 209, 32, 12, 84, 128, + 190, 156, 95, 105, 104, 246, 91, 215, + 219, 142, 36, 186, 247, 233, 167, 133, + 160, 16, 140, 169, 23, 96, 155, 235, + 179, 76, 253, 103, 238, 67, 35, 121, + 100, 27, 213, 58, 77, 248, 174, 39, + 214, 56, 42, 200, 106, 21, 129, 114, + 252, 113, 168, 53, 25, 216, 64, 232, + 81, 75, 2, 224, 250, 60, 135, 204, + 48, 196, 94, 63, 244, 191, 93, 126, + 138, 159, 9, 85, 249, 34, 185, 163, + 17, 65, 184, 82, 109, 172, 108, 69, + 150, 3, 20, 221, 162, 212, 152, 59, + 198, 74, 229, 55, 87, 178, 141, 199, + 57, 130, 80, 173, 101, 122, 144, 51, + 139, 11, 8, 125, 158, 124, 123, 37, + 14, 24, 22, 43, 197, 50, 98, 6, + 176, 251, 86, 218, 193, 71, 145, 1, + 45, 38, 189, 143, 245, 157, 181 + }; + + static const std::size_t error_index_size = sizeof(global_random_error_index) / sizeof(std::size_t); + + template + inline void corrupt_message_all_errors_at_index(schifra::reed_solomon::block& rsblock, + const std::size_t error_count, + const std::size_t& error_index_start_position, + const bool display_positions = false) + { + schifra::reed_solomon::block tmp_rsblock = rsblock; + + for (std::size_t i = 0; i < error_count; ++i) + { + std::size_t error_position = (global_random_error_index[(error_index_start_position + i) % error_index_size]) % code_length; + + add_error(error_position,rsblock); + + if (display_positions) + { + std::cout << "Error index: " << error_position << std::endl; + } + } + } + + template + inline void corrupt_message_all_errors_at_index(schifra::reed_solomon::block& rsblock, + const std::size_t error_count, + const std::size_t& error_index_start_position, + const 
std::vector& random_error_index, + const bool display_positions = false) + { + for (std::size_t i = 0; i < error_count; ++i) + { + std::size_t error_position = (random_error_index[(error_index_start_position + i) % random_error_index.size()]) % code_length; + + add_error(error_position,rsblock); + + if (display_positions) + { + std::cout << "Error index: " << error_position << std::endl; + } + } + } + + inline void generate_error_index(const std::size_t index_size, + std::vector& random_error_index, + std::size_t seed) + { + if (0 == seed) + { + seed = 0xA5A5A5A5; + } + + ::srand(static_cast(seed)); + + std::deque index_list; + + for (std::size_t i = 0; i < index_size; ++i) + { + index_list.push_back(i); + } + + random_error_index.reserve(index_size); + random_error_index.resize(0); + + while (!index_list.empty()) + { + // possibly the worst way of doing this. + std::size_t index = ::rand() % index_list.size(); + + random_error_index.push_back(index_list[index]); + index_list.erase(index_list.begin() + index); + } + } + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_fileio.hpp b/hsmodem/fec/schifra_fileio.hpp new file mode 100644 index 0000000..00443a1 --- /dev/null +++ b/hsmodem/fec/schifra_fileio.hpp @@ -0,0 +1,227 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_FILEIO_HPP +#define INCLUDE_SCHIFRA_FILEIO_HPP + + +#include +#include +#include +#include +#include + +#include "schifra_crc.hpp" + + +namespace schifra +{ + + namespace fileio + { + + inline void read_into_vector(const std::string& file_name, std::vector& buffer) + { + std::ifstream file(file_name.c_str()); + if (!file) return; + std::string line; + while (std::getline(file,line)) + { + buffer.push_back(line); + } + file.close(); + } + + inline void write_from_vector(const std::string& file_name, const std::vector& buffer) + { + std::ofstream file(file_name.c_str()); + if (!file) return; + std::ostream_iterator os(file,"\n"); + std::copy(buffer.begin(),buffer.end(), os); + file.close(); + } + + inline bool file_exists(const std::string& file_name) + { + std::ifstream file(file_name.c_str(), std::ios::binary); + return ((!file) ? 
false : true); + } + + inline std::size_t file_size(const std::string& file_name) + { + std::ifstream file(file_name.c_str(),std::ios::binary); + if (!file) return 0; + file.seekg (0, std::ios::end); + return static_cast(file.tellg()); + } + + inline void load_file(const std::string& file_name, std::string& buffer) + { + std::ifstream file(file_name.c_str(), std::ios::binary); + if (!file) return; + buffer.assign(std::istreambuf_iterator(file),std::istreambuf_iterator()); + file.close(); + } + + inline void load_file(const std::string& file_name, char** buffer, std::size_t& buffer_size) + { + std::ifstream in_stream(file_name.c_str(),std::ios::binary); + if (!in_stream) return; + buffer_size = file_size(file_name); + *buffer = new char[buffer_size]; + in_stream.read(*buffer,static_cast(buffer_size)); + in_stream.close(); + } + + inline void write_file(const std::string& file_name, const std::string& buffer) + { + std::ofstream file(file_name.c_str(),std::ios::binary); + file << buffer; + file.close(); + } + + inline void write_file(const std::string& file_name, char* buffer, const std::size_t& buffer_size) + { + std::ofstream out_stream(file_name.c_str(),std::ios::binary); + if (!out_stream) return; + out_stream.write(buffer,static_cast(buffer_size)); + out_stream.close(); + } + + inline bool copy_file(const std::string& src_file_name, const std::string& dest_file_name) + { + std::ifstream src_file(src_file_name.c_str(),std::ios::binary); + std::ofstream dest_file(dest_file_name.c_str(),std::ios::binary); + if (!src_file) return false; + if (!dest_file) return false; + + const std::size_t block_size = 1024; + char buffer[block_size]; + + std::size_t remaining_bytes = file_size(src_file_name); + + while (remaining_bytes >= block_size) + { + src_file.read(&buffer[0],static_cast(block_size)); + dest_file.write(&buffer[0],static_cast(block_size)); + remaining_bytes -= block_size; + } + + if (remaining_bytes > 0) + { + src_file.read(&buffer[0],static_cast(remaining_bytes)); + dest_file.write(&buffer[0],static_cast(remaining_bytes)); + remaining_bytes = 0; + } + + src_file.close(); + dest_file.close(); + + return true; + } + + inline bool files_identical(const std::string& file_name1, const std::string& file_name2) + { + std::ifstream file1(file_name1.c_str(),std::ios::binary); + std::ifstream file2(file_name2.c_str(),std::ios::binary); + if (!file1) return false; + if (!file2) return false; + if (file_size(file_name1) != file_size(file_name2)) return false; + + const std::size_t block_size = 1024; + char buffer1[block_size]; + char buffer2[block_size]; + + std::size_t remaining_bytes = file_size(file_name1); + + while (remaining_bytes >= block_size) + { + file1.read(&buffer1[0],static_cast(block_size)); + file2.read(&buffer2[0],static_cast(block_size)); + + for (std::size_t i = 0; i < block_size; ++i) + { + if (buffer1[i] != buffer2[i]) + { + return false; + } + } + + remaining_bytes -= block_size; + } + + if (remaining_bytes > 0) + { + file1.read(&buffer1[0],static_cast(remaining_bytes)); + file2.read(&buffer2[0],static_cast(remaining_bytes)); + + for (std::size_t i = 0; i < remaining_bytes; ++i) + { + if (buffer1[i] != buffer2[i]) + { + return false; + } + } + + remaining_bytes = 0; + } + + file1.close(); + file2.close(); + + return true; + } + + inline std::size_t file_crc(crc32& crc_module, const std::string& file_name) + { + std::ifstream file(file_name.c_str(),std::ios::binary); + if (!file) return 0; + + const std::size_t block_size = 1024; + char buffer[block_size]; + + std::size_t 
remaining_bytes = file_size(file_name); + + crc_module.reset(); + + while (remaining_bytes >= block_size) + { + file.read(&buffer[0],static_cast(block_size)); + crc_module.update(buffer,block_size); + remaining_bytes -= block_size; + } + + if (remaining_bytes > 0) + { + file.read(&buffer[0],static_cast(remaining_bytes)); + crc_module.update(buffer,remaining_bytes); + remaining_bytes = 0; + } + + return crc_module.crc(); + } + + } // namespace fileio + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_galois_field.hpp b/hsmodem/fec/schifra_galois_field.hpp new file mode 100644 index 0000000..ec7ee3a --- /dev/null +++ b/hsmodem/fec/schifra_galois_field.hpp @@ -0,0 +1,518 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_GALOIS_FIELD_HPP +#define INCLUDE_SCHIFRA_GALOIS_FIELD_HPP + + +#include +#include +#include +#include +#include + + +namespace schifra +{ + + namespace galois + { + + typedef int field_symbol; + const field_symbol GFERROR = -1; + + class field + { + public: + + field(const int pwr, const std::size_t primpoly_deg, const unsigned int* primitive_poly); + ~field(); + + bool operator==(const field& gf) const; + bool operator!=(const field& gf) const; + + inline field_symbol index(const field_symbol value) const + { + return index_of_[value]; + } + + inline field_symbol alpha(const field_symbol value) const + { + return alpha_to_[value]; + } + + inline unsigned int size() const + { + return field_size_; + } + + inline unsigned int pwr() const + { + return power_; + } + + inline unsigned int mask() const + { + return field_size_; + } + + inline field_symbol add(const field_symbol& a, const field_symbol& b) const + { + return (a ^ b); + } + + inline field_symbol sub(const field_symbol& a, const field_symbol& b) const + { + return (a ^ b); + } + + inline field_symbol normalize(field_symbol x) const + { + while (x < 0) + { + x += static_cast(field_size_); + } + + while (x >= static_cast(field_size_)) + { + x -= static_cast(field_size_); + x = (x >> power_) + (x & field_size_); + } + + return x; + } + + inline field_symbol mul(const field_symbol& a, const field_symbol& b) const + { + #if !defined(NO_GFLUT) + return mul_table_[a][b]; + #else + if ((a == 0) || (b == 0)) + return 0; + else + return alpha_to_[normalize(index_of_[a] + index_of_[b])]; + #endif + } + + inline field_symbol div(const field_symbol& a, const field_symbol& b) const + { + #if !defined(NO_GFLUT) + return div_table_[a][b]; + #else + if ((a == 0) || (b == 0)) + return 0; + else + return alpha_to_[normalize(index_of_[a] - index_of_[b] + field_size_)]; + #endif + } + + inline field_symbol exp(const field_symbol& a, int n) const + { + #if !defined(NO_GFLUT) + if (n >= 0) + return exp_table_[a][n & field_size_]; + else + { + while (n < 
0) n += field_size_; + + return (n ? exp_table_[a][n] : 1); + } + #else + if (a != 0) + { + if (n < 0) + { + while (n < 0) n += field_size_; + return (n ? alpha_to_[normalize(index_of_[a] * n)] : 1); + } + else if (n) + return alpha_to_[normalize(index_of_[a] * static_cast(n))]; + else + return 1; + } + else + return 0; + #endif + } + + #ifdef LINEAR_EXP_LUT + inline field_symbol* const linear_exp(const field_symbol& a) const + { + #if !defined(NO_GFLUT) + static const field_symbol upper_bound = 2 * field_size_; + if ((a >= 0) && (a <= upper_bound)) + return linear_exp_table_[a]; + else + return reinterpret_cast(0); + #else + return reinterpret_cast(0); + #endif + } + #endif + + inline field_symbol inverse(const field_symbol& val) const + { + #if !defined(NO_GFLUT) + return mul_inverse_[val]; + #else + return alpha_to_[normalize(field_size_ - index_of_[val])]; + #endif + } + + inline unsigned int prim_poly_term(const unsigned int index) const + { + return prim_poly_[index]; + } + + friend std::ostream& operator << (std::ostream& os, const field& gf); + + private: + + field(); + field(const field& gfield); + field& operator=(const field& gfield); + + void generate_field(const unsigned int* prim_poly_); + field_symbol gen_mul (const field_symbol& a, const field_symbol& b) const; + field_symbol gen_div (const field_symbol& a, const field_symbol& b) const; + field_symbol gen_exp (const field_symbol& a, const std::size_t& n) const; + field_symbol gen_inverse (const field_symbol& val) const; + + std::size_t create_array(char buffer_[], + const std::size_t& length, + const std::size_t offset, + field_symbol** array); + + std::size_t create_2d_array(char buffer_[], + std::size_t row_cnt, std::size_t col_cnt, + const std::size_t offset, + field_symbol*** array); + unsigned int power_; + std::size_t prim_poly_deg_; + unsigned int field_size_; + unsigned int prim_poly_hash_; + unsigned int* prim_poly_; + field_symbol* alpha_to_; // aka exponential or anti-log + field_symbol* index_of_; // aka log + field_symbol* mul_inverse_; // multiplicative inverse + field_symbol** mul_table_; + field_symbol** div_table_; + field_symbol** exp_table_; + field_symbol** linear_exp_table_; + char* buffer_; + }; + + inline field::field(const int pwr, const std::size_t primpoly_deg, const unsigned int* primitive_poly) + : power_(pwr), + prim_poly_deg_(primpoly_deg), + field_size_((1 << power_) - 1) + { + alpha_to_ = new field_symbol [field_size_ + 1]; + index_of_ = new field_symbol [field_size_ + 1]; + + #if !defined(NO_GFLUT) + + #ifdef LINEAR_EXP_LUT + static const std::size_t buffer_size = ((6 * (field_size_ + 1) * (field_size_ + 1)) + ((field_size_ + 1) * 2)) * sizeof(field_symbol); + #else + static const std::size_t buffer_size = ((4 * (field_size_ + 1) * (field_size_ + 1)) + ((field_size_ + 1) * 2)) * sizeof(field_symbol); + #endif + + buffer_ = new char[buffer_size]; + std::size_t offset = 0; + offset = create_2d_array(buffer_,(field_size_ + 1),(field_size_ + 1),offset,&mul_table_); + offset = create_2d_array(buffer_,(field_size_ + 1),(field_size_ + 1),offset,&div_table_); + offset = create_2d_array(buffer_,(field_size_ + 1),(field_size_ + 1),offset,&exp_table_); + + #ifdef LINEAR_EXP_LUT + offset = create_2d_array(buffer_,(field_size_ + 1),(field_size_ + 1) * 2,offset,&linear_exp_table_); + #else + linear_exp_table_ = 0; + #endif + + offset = create_array(buffer_,(field_size_ + 1) * 2,offset,&mul_inverse_); + + #else + + buffer_ = 0; + mul_table_ = 0; + div_table_ = 0; + exp_table_ = 0; + mul_inverse_ = 0; + 
linear_exp_table_ = 0; + + #endif + + prim_poly_ = new unsigned int [prim_poly_deg_ + 1]; + + for (unsigned int i = 0; i < (prim_poly_deg_ + 1); ++i) + { + prim_poly_[i] = primitive_poly[i]; + } + + prim_poly_hash_ = 0xAAAAAAAA; + + for (std::size_t i = 0; i < (prim_poly_deg_ + 1); ++i) + { + prim_poly_hash_ += ((i & 1) == 0) ? ( (prim_poly_hash_ << 7) ^ primitive_poly[i] * (prim_poly_hash_ >> 3)) : + (~((prim_poly_hash_ << 11) + (primitive_poly[i] ^ (prim_poly_hash_ >> 5)))); + } + + generate_field(primitive_poly); + } + + inline field::~field() + { + if (0 != alpha_to_) { delete [] alpha_to_; alpha_to_ = 0; } + if (0 != index_of_) { delete [] index_of_; index_of_ = 0; } + if (0 != prim_poly_) { delete [] prim_poly_; prim_poly_ = 0; } + + #if !defined(NO_GFLUT) + + if (0 != mul_table_) { delete [] mul_table_; mul_table_ = 0; } + if (0 != div_table_) { delete [] div_table_; div_table_ = 0; } + if (0 != exp_table_) { delete [] exp_table_; exp_table_ = 0; } + + #ifdef LINEAR_EXP_LUT + if (0 != linear_exp_table_) { delete [] linear_exp_table_; linear_exp_table_ = 0; } + #endif + + if (0 != buffer_) { delete [] buffer_; buffer_ = 0; } + + #endif + } + + inline bool field::operator==(const field& gf) const + { + return ( + (this->power_ == gf.power_) && + (this->prim_poly_hash_ == gf.prim_poly_hash_) + ); + } + + inline bool field::operator!=(const field& gf) const + { + return !field::operator ==(gf); + } + + inline void field::generate_field(const unsigned int* prim_poly) + { + /* + Note: It is assumed that the degree of the primitive + polynomial will be equivelent to the m value as + in GF(2^m) + */ + + field_symbol mask = 1; + + alpha_to_[power_] = 0; + + for (field_symbol i = 0; i < static_cast(power_); ++i) + { + alpha_to_[i] = mask; + index_of_[alpha_to_[i]] = i; + + if (prim_poly[i] != 0) + { + alpha_to_[power_] ^= mask; + } + + mask <<= 1; + } + + index_of_[alpha_to_[power_]] = power_; + + mask >>= 1; + + for (field_symbol i = power_ + 1; i < static_cast(field_size_); ++i) + { + if (alpha_to_[i - 1] >= mask) + alpha_to_[i] = alpha_to_[power_] ^ ((alpha_to_[i - 1] ^ mask) << 1); + else + alpha_to_[i] = alpha_to_[i - 1] << 1; + + index_of_[alpha_to_[i]] = i; + } + + index_of_[0] = GFERROR; + alpha_to_[field_size_] = 1; + + #if !defined(NO_GFLUT) + + for (field_symbol i = 0; i < static_cast(field_size_ + 1); ++i) + { + for (field_symbol j = 0; j < static_cast(field_size_ + 1); ++j) + { + mul_table_[i][j] = gen_mul(i,j); + div_table_[i][j] = gen_div(i,j); + exp_table_[i][j] = gen_exp(i,j); + } + } + + #ifdef LINEAR_EXP_LUT + for (field_symbol i = 0; i < static_cast(field_size_ + 1); ++i) + { + for (int j = 0; j < static_cast(2 * field_size_); ++j) + { + linear_exp_table_[i][j] = gen_exp(i,j); + } + } + #endif + + for (field_symbol i = 0; i < static_cast(field_size_ + 1); ++i) + { + mul_inverse_[i] = gen_inverse(i); + mul_inverse_[i + (field_size_ + 1)] = mul_inverse_[i]; + } + + #endif + } + + inline field_symbol field::gen_mul(const field_symbol& a, const field_symbol& b) const + { + if ((a == 0) || (b == 0)) + return 0; + else + return alpha_to_[normalize(index_of_[a] + index_of_[b])]; + } + + inline field_symbol field::gen_div(const field_symbol& a, const field_symbol& b) const + { + if ((a == 0) || (b == 0)) + return 0; + else + return alpha_to_[normalize(index_of_[a] - index_of_[b] + field_size_)]; + } + + inline field_symbol field::gen_exp(const field_symbol& a, const std::size_t& n) const + { + if (a != 0) + return ((n == 0) ? 
1 : alpha_to_[normalize(index_of_[a] * static_cast(n))]); + else + return 0; + } + + inline field_symbol field::gen_inverse(const field_symbol& val) const + { + return alpha_to_[normalize(field_size_ - index_of_[val])]; + } + + inline std::size_t field::create_array(char buffer[], + const std::size_t& length, + const std::size_t offset, + field_symbol** array) + { + const std::size_t row_size = length * sizeof(field_symbol); + (*array) = new(buffer + offset)field_symbol[length]; + return row_size + offset; + } + + inline std::size_t field::create_2d_array(char buffer[], + std::size_t row_cnt, std::size_t col_cnt, + const std::size_t offset, + field_symbol*** array) + { + const std::size_t row_size = col_cnt * sizeof(field_symbol); + char* buffer__offset = buffer + offset; + (*array) = new field_symbol* [row_cnt]; + for (std::size_t i = 0; i < row_cnt; ++i) + { + (*array)[i] = new(buffer__offset + (i * row_size))field_symbol[col_cnt]; + } + return (row_cnt * row_size) + offset; + } + + inline std::ostream& operator << (std::ostream& os, const field& gf) + { + for (std::size_t i = 0; i < (gf.field_size_ + 1); ++i) + { + os << i << "\t" << gf.alpha_to_[i] << "\t" << gf.index_of_[i] << std::endl; + } + + return os; + } + + /* 1x^0 + 1x^1 + 0x^2 + 1x^3 */ + const unsigned int primitive_polynomial00[] = {1, 1, 0, 1}; + const unsigned int primitive_polynomial_size00 = 4; + + /* 1x^0 + 1x^1 + 0x^2 + 0x^3 + 1x^4*/ + const unsigned int primitive_polynomial01[] = {1, 1, 0, 0, 1}; + const unsigned int primitive_polynomial_size01 = 5; + + /* 1x^0 + 0x^1 + 1x^2 + 0x^3 + 0x^4 + 1x^5 */ + const unsigned int primitive_polynomial02[] = {1, 0, 1, 0, 0, 1}; + const unsigned int primitive_polynomial_size02 = 6; + + /* 1x^0 + 1x^1 + 0x^2 + 0x^3 + 0x^4 + 0x^5 + 1x^6 */ + const unsigned int primitive_polynomial03[] = {1, 1, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size03 = 7; + + /* 1x^0 + 0x^1 + 0x^2 + 1x^3 + 0x^4 + 0x^5 + 0x^6 + 1x^7 */ + const unsigned int primitive_polynomial04[] = {1, 0, 0, 1, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size04 = 8; + + /* 1x^0 + 0x^1 + 1x^2 + 1x^3 + 1x^4 + 0x^5 + 0x^6 + 0x^7 + 1x^8 */ + const unsigned int primitive_polynomial05[] = {1, 0, 1, 1, 1, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size05 = 9; + + /* 1x^0 + 1x^1 + 1x^2 + 0x^3 + 0x^4 + 0x^5 + 0x^6 + 1x^7 + 1x^8 */ + const unsigned int primitive_polynomial06[] = {1, 1, 1, 0, 0, 0, 0, 1, 1}; + const unsigned int primitive_polynomial_size06 = 9; + + /* 1x^0 + 0x^1 + 0x^2 + 0x^3 + 1x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 1x^9 */ + const unsigned int primitive_polynomial07[] = {1, 0, 0, 0, 1, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size07 = 10; + + /* 1x^0 + 0x^1 + 0x^2 + 1x^3 + 0x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 0x^9 + 1x^10 */ + const unsigned int primitive_polynomial08[] = {1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size08 = 11; + + /* 1x^0 + 0x^1 + 1x^2 + 0x^3 + 0x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 0x^9 + 0x^10 + 1x^11 */ + const unsigned int primitive_polynomial09[] = {1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size09 = 12; + + /* 1x^0 + 1x^1 + 0x^2 + 0x^3 + 1x^4 + 0x^5 + 1x^6 + 0x^7 + 0x^8 + 0x^9 + 0x^10 + 0x^11 + 1x^12 */ + const unsigned int primitive_polynomial10[] = {1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size10 = 13; + + /* 1x^0 + 1x^1 + 0x^2 + 1x^3 + 1x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 0x^9 + 0x^10 + 0x^11 + 0x^12 + 1x^13 */ + const unsigned int 
primitive_polynomial11[] = {1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size11 = 14; + + /* 1x^0 + 1x^1 + 0x^2 + 0x^3 + 0x^4 + 0x^5 + 1x^6 + 0x^7 + 0x^8 + 0x^9 + 1x^10 + 0x^11 + 0x^12 + 0x^13 + 1x^14 */ + const unsigned int primitive_polynomial12[] = {1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size12 = 15; + + /* 1x^0 + 1x^1 + 0x^2 + 0x^3 + 0x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 0x^9 + 0x^10 + 0x^11 + 0x^12 + 0x^13 + 0x^14 + 1x^15 */ + const unsigned int primitive_polynomial13[] = {1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size13 = 16; + + /* 1x^0 + 1x^1 + 0x^2 + 1x^3 + 0x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 0x^9 + 0x^10 + 0x^11 + 1x^12 + 0x^13 + 0x^14 + 0x^15 + 1x^16 */ + const unsigned int primitive_polynomial14[] = {1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size14 = 17; + + } // namespace galois + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_galois_field_element.hpp b/hsmodem/fec/schifra_galois_field_element.hpp new file mode 100644 index 0000000..e6aa89b --- /dev/null +++ b/hsmodem/fec/schifra_galois_field_element.hpp @@ -0,0 +1,277 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
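
The header just completed (schifra_galois_field.hpp) builds the GF(2^m) exp/log tables from one of the primitive polynomials listed above. As a minimal illustrative sketch of how it is typically instantiated and used — the file name, the choice of primitive_polynomial06 and the surrounding main() are assumptions mirroring Schifra's published examples, not part of this patch:

```cpp
// gf_demo.cpp -- illustrative sketch only; not part of this patch or the Schifra sources.
#include <iostream>

#include "schifra_galois_field.hpp"

int main()
{
   // GF(2^8) over the primitive polynomial x^8 + x^7 + x^2 + x + 1
   // (primitive_polynomial06 above); same instantiation pattern as
   // Schifra's own example programs.
   const schifra::galois::field gf(8,
                                   schifra::galois::primitive_polynomial_size06,
                                   schifra::galois::primitive_polynomial06);

   const schifra::galois::field_symbol a = 0x53;
   const schifra::galois::field_symbol b = 0xCA;

   // add()/sub() are plain XOR; mul()/div()/inverse() resolve through the
   // alpha_to_/index_of_ lookup tables built by generate_field().
   std::cout << std::hex
             << "a + b          = " << gf.add(a, b)              << '\n'
             << "a * b          = " << gf.mul(a, b)              << '\n'
             << "a * inverse(a) = " << gf.mul(a, gf.inverse(a))  << '\n'; // expect 1

   return 0;
}
```
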
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_GALOIS_FIELD_ELEMENT_HPP +#define INCLUDE_SCHIFRA_GALOIS_FIELD_ELEMENT_HPP + + +#include +#include + +#include "schifra_galois_field.hpp" + + +namespace schifra +{ + + namespace galois + { + + class field_element + { + public: + + field_element(const field& gfield) + : field_(gfield), + poly_value_(-1) + {} + + field_element(const field& gfield,const field_symbol& v) + : field_(const_cast(gfield)), + poly_value_(v) + {} + + field_element(const field_element& gfe) + : field_(const_cast(gfe.field_)), + poly_value_(gfe.poly_value_) + {} + + ~field_element() + {} + + inline field_element& operator = (const field_element& gfe) + { + if ((this != &gfe) && (&field_ == &gfe.field_)) + { + poly_value_ = gfe.poly_value_; + } + + return *this; + } + + inline field_element& operator = (const field_symbol& v) + { + poly_value_ = v & field_.size(); + return *this; + } + + inline field_element& operator += (const field_element& gfe) + { + poly_value_ ^= gfe.poly_value_; + return *this; + } + + inline field_element& operator += (const field_symbol& v) + { + poly_value_ ^= v; + return *this; + } + + inline field_element& operator -= (const field_element& gfe) + { + *this += gfe; + return *this; + } + + inline field_element& operator -= (const field_symbol& v) + { + *this += v; + return *this; + } + + inline field_element& operator *= (const field_element& gfe) + { + poly_value_ = field_.mul(poly_value_, gfe.poly_value_); + return *this; + } + + inline field_element& operator *= (const field_symbol& v) + { + poly_value_ = field_.mul(poly_value_, v); + return *this; + } + + inline field_element& operator /= (const field_element& gfe) + { + poly_value_ = field_.div(poly_value_, gfe.poly_value_); + return *this; + } + + inline field_element& operator /= (const field_symbol& v) + { + poly_value_ = field_.div(poly_value_, v); + return *this; + } + + inline field_element& operator ^= (const int& n) + { + poly_value_ = field_.exp(poly_value_,n); + return *this; + } + + inline bool operator == (const field_element& gfe) const + { + return ((field_ == gfe.field_) && (poly_value_ == gfe.poly_value_)); + } + + inline bool operator == (const field_symbol& v) const + { + return (poly_value_ == v); + } + + inline bool operator != (const field_element& gfe) const + { + return ((field_ != gfe.field_) || (poly_value_ != gfe.poly_value_)); + } + + inline bool operator != (const field_symbol& v) const + { + return (poly_value_ != v); + } + + inline bool operator < (const field_element& gfe) + { + return (poly_value_ < gfe.poly_value_); + } + + inline bool operator < (const field_symbol& v) + { + return (poly_value_ < v); + } + + inline bool operator > (const field_element& gfe) + { + return (poly_value_ > gfe.poly_value_); + } + + inline bool operator > (const field_symbol& v) + { + return (poly_value_ > v); + } + + inline field_symbol index() const + { + return field_.index(poly_value_); + } + + inline field_symbol poly() const + { + return poly_value_; + } + + inline field_symbol& poly() + { + return poly_value_; + } + + inline const field& galois_field() const + { + return field_; + } + + inline field_symbol inverse() const + { + return field_.inverse(poly_value_); + } + + inline void normalize() + { + poly_value_ &= field_.size(); + } + + friend std::ostream& operator << (std::ostream& os, const field_element& gfe); + + private: + + const field& 
field_; + field_symbol poly_value_; + + }; + + inline field_element operator + (const field_element& a, const field_element& b); + inline field_element operator - (const field_element& a, const field_element& b); + inline field_element operator * (const field_element& a, const field_element& b); + inline field_element operator * (const field_element& a, const field_symbol& b); + inline field_element operator * (const field_symbol& a, const field_element& b); + inline field_element operator / (const field_element& a, const field_element& b); + inline field_element operator ^ (const field_element& a, const int& b); + + inline std::ostream& operator << (std::ostream& os, const field_element& gfe) + { + os << gfe.poly_value_; + return os; + } + + inline field_element operator + (const field_element& a, const field_element& b) + { + field_element result = a; + result += b; + return result; + } + + inline field_element operator - (const field_element& a, const field_element& b) + { + field_element result = a; + result -= b; + return result; + } + + inline field_element operator * (const field_element& a, const field_element& b) + { + field_element result = a; + result *= b; + return result; + } + + inline field_element operator * (const field_element& a, const field_symbol& b) + { + field_element result = a; + result *= b; + return result; + } + + inline field_element operator * (const field_symbol& a, const field_element& b) + { + field_element result = b; + result *= a; + return result; + } + + inline field_element operator / (const field_element& a, const field_element& b) + { + field_element result = a; + result /= b; + return result; + } + + inline field_element operator ^ (const field_element& a, const int& b) + { + field_element result = a; + result ^= b; + return result; + } + + } // namespace galois + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_galois_field_polynomial.hpp b/hsmodem/fec/schifra_galois_field_polynomial.hpp new file mode 100644 index 0000000..63ff7d1 --- /dev/null +++ b/hsmodem/fec/schifra_galois_field_polynomial.hpp @@ -0,0 +1,839 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
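
schifra_galois_field_element.hpp (completed above) wraps a single GF symbol together with a reference to its field. A short sketch of the element-level arithmetic, under the same GF(2^8) assumption as before; the demo file and main() are illustrative only:

```cpp
// gfe_demo.cpp -- illustrative sketch only; not part of this patch or the Schifra sources.
#include <iostream>

#include "schifra_galois_field.hpp"
#include "schifra_galois_field_element.hpp"

int main()
{
   const schifra::galois::field gf(8,
                                   schifra::galois::primitive_polynomial_size06,
                                   schifra::galois::primitive_polynomial06);

   schifra::galois::field_element a(gf, 0x57);
   schifra::galois::field_element b(gf, 0x57);

   // In GF(2^m) addition and subtraction are both XOR, so any element added
   // to itself is zero; multiplication delegates to field::mul() and its LUTs.
   std::cout << (a + b)        << '\n';  // prints 0
   std::cout << (a * b).poly() << '\n';  // a squared, in polynomial form

   return 0;
}
```
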
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_GALOIS_FIELD_POLYNOMIAL_HPP +#define INCLUDE_SCHIFRA_GALOIS_FIELD_POLYNOMIAL_HPP + + +#include +#include +#include + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_element.hpp" + + +namespace schifra +{ + + namespace galois + { + + class field_polynomial + { + public: + + field_polynomial(const field& gfield); + field_polynomial(const field& gfield, const unsigned int& degree); + field_polynomial(const field& gfield, const unsigned int& degree, const field_element element[]); + field_polynomial(const field_polynomial& polynomial); + field_polynomial(const field_element& gfe); + ~field_polynomial() {} + + bool valid() const; + int deg() const; + const field& galois_field() const; + void set_degree(const unsigned int& x); + void simplify(); + + field_polynomial& operator = (const field_polynomial& polynomial); + field_polynomial& operator = (const field_element& element); + field_polynomial& operator += (const field_polynomial& element); + field_polynomial& operator += (const field_element& element); + field_polynomial& operator -= (const field_polynomial& element); + field_polynomial& operator -= (const field_element& element); + field_polynomial& operator *= (const field_polynomial& polynomial); + field_polynomial& operator *= (const field_element& element); + field_polynomial& operator /= (const field_polynomial& divisor); + field_polynomial& operator /= (const field_element& element); + field_polynomial& operator %= (const field_polynomial& divisor); + field_polynomial& operator %= (const unsigned int& power); + field_polynomial& operator ^= (const unsigned int& n); + field_polynomial& operator <<= (const unsigned int& n); + field_polynomial& operator >>= (const unsigned int& n); + + field_element& operator[] (const std::size_t& term); + field_element operator() (const field_element& value); + field_element operator() (field_symbol value); + + const field_element& operator[](const std::size_t& term) const; + const field_element operator()(const field_element& value) const; + const field_element operator()(field_symbol value) const; + + bool operator==(const field_polynomial& polynomial) const; + bool operator!=(const field_polynomial& polynomial) const; + + bool monic() const; + + field_polynomial derivative() const; + + friend std::ostream& operator << (std::ostream& os, const field_polynomial& polynomial); + + private: + + typedef std::vector::iterator poly_iter; + typedef std::vector::const_iterator const_poly_iter; + + void simplify(field_polynomial& polynomial) const; + + field& field_; + std::vector poly_; + }; + + field_polynomial operator + (const field_polynomial& a, const field_polynomial& b); + field_polynomial operator + (const field_polynomial& a, const field_element& b); + field_polynomial operator + (const field_element& a, const field_polynomial& b); + field_polynomial operator + (const field_polynomial& a, const field_symbol& b); + field_polynomial operator + (const field_symbol& a, const field_polynomial& b); + field_polynomial operator - (const field_polynomial& a, const field_polynomial& b); + field_polynomial operator - (const field_polynomial& a, const field_element& b); + field_polynomial operator - (const field_element& a, const field_polynomial& b); + field_polynomial operator - (const field_polynomial& a, const field_symbol& b); + field_polynomial operator - (const 
field_symbol& a, const field_polynomial& b); + field_polynomial operator * (const field_polynomial& a, const field_polynomial& b); + field_polynomial operator * (const field_element& a, const field_polynomial& b); + field_polynomial operator * (const field_polynomial& a, const field_element& b); + field_polynomial operator / (const field_polynomial& a, const field_polynomial& b); + field_polynomial operator / (const field_polynomial& a, const field_element& b); + field_polynomial operator % (const field_polynomial& a, const field_polynomial& b); + field_polynomial operator % (const field_polynomial& a, const unsigned int& power); + field_polynomial operator ^ (const field_polynomial& a, const int& n); + field_polynomial operator <<(const field_polynomial& a, const unsigned int& n); + field_polynomial operator >>(const field_polynomial& a, const unsigned int& n); + field_polynomial gcd(const field_polynomial& a, const field_polynomial& b); + + inline field_polynomial::field_polynomial(const field& gfield) + : field_(const_cast(gfield)) + { + poly_.clear(); + poly_.reserve(256); + } + + inline field_polynomial::field_polynomial(const field& gfield, const unsigned int& degree) + : field_(const_cast(gfield)) + { + poly_.reserve(256); + poly_.resize(degree + 1,field_element(field_,0)); + } + + inline field_polynomial::field_polynomial(const field& gfield, const unsigned int& degree, const field_element element[]) + : field_(const_cast(gfield)) + { + poly_.reserve(256); + + if (element != NULL) + { + /* + It is assumed that element is an array of field elements + with size/element count of degree + 1. + */ + for (unsigned int i = 0; i <= degree; ++i) + { + poly_.push_back(element[i]); + } + } + else + poly_.resize(degree + 1, field_element(field_, 0)); + } + + inline field_polynomial::field_polynomial(const field_polynomial& polynomial) + : field_(const_cast(polynomial.field_)), + poly_ (polynomial.poly_) + {} + + inline field_polynomial::field_polynomial(const field_element& element) + : field_(const_cast(element.galois_field())) + { + poly_.resize(1,element); + } + + inline bool field_polynomial::valid() const + { + return (poly_.size() > 0); + } + + inline int field_polynomial::deg() const + { + return static_cast(poly_.size()) - 1; + } + + inline const field& field_polynomial::galois_field() const + { + return field_; + } + + inline void field_polynomial::set_degree(const unsigned int& x) + { + poly_.resize(x - 1,field_element(field_,0)); + } + + inline field_polynomial& field_polynomial::operator = (const field_polynomial& polynomial) + { + if ((this != &polynomial) && (&field_ == &(polynomial.field_))) + { + poly_ = polynomial.poly_; + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator = (const field_element& element) + { + if (&field_ == &(element.galois_field())) + { + poly_.resize(1,element); + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator += (const field_polynomial& polynomial) + { + if (&field_ == &(polynomial.field_)) + { + if (poly_.size() < polynomial.poly_.size()) + { + const_poly_iter it0 = polynomial.poly_.begin(); + + for (poly_iter it1 = poly_.begin(); it1 != poly_.end(); ++it0, ++it1) + { + (*it1) += (*it0); + } + + while (it0 != polynomial.poly_.end()) + { + poly_.push_back(*it0); + ++it0; + } + } + else + { + poly_iter it0 = poly_.begin(); + + for (const_poly_iter it1 = polynomial.poly_.begin(); it1 != polynomial.poly_.end(); ++it0, ++it1) + { + (*it0) += (*it1); + } + } + + simplify(*this); + } + + return 
*this; + } + + inline field_polynomial& field_polynomial::operator += (const field_element& element) + { + poly_[0] += element; + return *this; + } + + inline field_polynomial& field_polynomial::operator -= (const field_polynomial& element) + { + return (*this += element); + } + + inline field_polynomial& field_polynomial::operator -= (const field_element& element) + { + poly_[0] -= element; + return *this; + } + + inline field_polynomial& field_polynomial::operator *= (const field_polynomial& polynomial) + { + if (&field_ == &(polynomial.field_)) + { + field_polynomial product(field_,deg() + polynomial.deg() + 1); + + poly_iter result_it = product.poly_.begin(); + + for (poly_iter it0 = poly_.begin(); it0 != poly_.end(); ++it0) + { + poly_iter current_result_it = result_it; + + for (const_poly_iter it1 = polynomial.poly_.begin(); it1 != polynomial.poly_.end(); ++it1) + { + (*current_result_it) += (*it0) * (*it1); + ++current_result_it; + } + + ++result_it; + } + + simplify(product); + poly_ = product.poly_; + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator *= (const field_element& element) + { + if (field_ == element.galois_field()) + { + for (poly_iter it = poly_.begin(); it != poly_.end(); ++it) + { + (*it) *= element; + } + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator /= (const field_polynomial& divisor) + { + if ( + (&field_ == &divisor.field_) && + (deg() >= divisor.deg()) && + (divisor.deg() >= 0) + ) + { + field_polynomial quotient (field_, deg() - divisor.deg() + 1); + field_polynomial remainder(field_, divisor.deg() - 1); + + for (int i = static_cast(deg()); i >= 0; i--) + { + if (i <= static_cast(quotient.deg())) + { + quotient[i] = remainder[remainder.deg()] / divisor[divisor.deg()]; + + for (int j = static_cast(remainder.deg()); j > 0; --j) + { + remainder[j] = remainder[j - 1] + (quotient[i] * divisor[j]); + } + + remainder[0] = poly_[i] + (quotient[i] * divisor[0]); + } + else + { + for (int j = static_cast(remainder.deg()); j > 0; --j) + { + remainder[j] = remainder[j - 1]; + } + + remainder[0] = poly_[i]; + } + } + + simplify(quotient); + poly_ = quotient.poly_; + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator /= (const field_element& element) + { + if (field_ == element.galois_field()) + { + for (poly_iter it = poly_.begin(); it != poly_.end(); ++it) + { + (*it) /= element; + } + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator %= (const field_polynomial& divisor) + { + if ( + (field_ == divisor.field_) && + (deg() >= divisor.deg() ) && + (divisor.deg() >= 0 ) + ) + { + field_polynomial quotient (field_, deg() - divisor.deg() + 1); + field_polynomial remainder(field_, divisor.deg() - 1); + + for (int i = static_cast(deg()); i >= 0; i--) + { + if (i <= static_cast(quotient.deg())) + { + quotient[i] = remainder[remainder.deg()] / divisor[divisor.deg()]; + + for (int j = static_cast(remainder.deg()); j > 0; --j) + { + remainder[j] = remainder[j - 1] + (quotient[i] * divisor[j]); + } + + remainder[0] = poly_[i] + (quotient[i] * divisor[0]); + } + else + { + for (int j = static_cast(remainder.deg()); j > 0; --j) + { + remainder[j] = remainder[j - 1]; + } + + remainder[0] = poly_[i]; + } + } + + poly_ = remainder.poly_; + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator %= (const unsigned int& power) + { + if (poly_.size() >= power) + { + poly_.resize(power,field_element(field_,0)); + simplify(*this); + } + + return 
*this; + } + + inline field_polynomial& field_polynomial::operator ^= (const unsigned int& n) + { + field_polynomial result = *this; + + for (std::size_t i = 0; i < n; ++i) + { + result *= *this; + } + + *this = result; + + return *this; + } + + inline field_polynomial& field_polynomial::operator <<= (const unsigned int& n) + { + if (poly_.size() > 0) + { + size_t initial_size = poly_.size(); + + poly_.resize(poly_.size() + n, field_element(field_,0)); + + for (size_t i = initial_size - 1; static_cast(i) >= 0; --i) + { + poly_[i + n] = poly_[i]; + } + + for (unsigned int i = 0; i < n; ++i) + { + poly_[i] = 0; + } + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator >>= (const unsigned int& n) + { + if (n <= poly_.size()) + { + for (unsigned int i = 0; i <= deg() - n; ++i) + { + poly_[i] = poly_[i + n]; + } + + poly_.resize(poly_.size() - n,field_element(field_,0)); + } + else if (static_cast(n) >= (deg() + 1)) + { + poly_.resize(0,field_element(field_,0)); + } + + return *this; + } + + inline const field_element& field_polynomial::operator [] (const std::size_t& term) const + { + assert(term < poly_.size()); + return poly_[term]; + } + + inline field_element& field_polynomial::operator [] (const std::size_t& term) + { + assert(term < poly_.size()); + return poly_[term]; + } + + inline field_element field_polynomial::operator () (const field_element& value) + { + field_element result(field_,0); + + if (!poly_.empty()) + { + int i = 0; + field_symbol total_sum = 0 ; + field_symbol value_poly_form = value.poly(); + + for (poly_iter it = poly_.begin(); it != poly_.end(); ++it, ++i) + { + total_sum ^= field_.mul(field_.exp(value_poly_form,i), (*it).poly()); + } + + result = total_sum; + } + + return result; + } + + inline const field_element field_polynomial::operator () (const field_element& value) const + { + if (!poly_.empty()) + { + int i = 0; + field_symbol total_sum = 0 ; + field_symbol value_poly_form = value.poly(); + + for (const_poly_iter it = poly_.begin(); it != poly_.end(); ++it, ++i) + { + total_sum ^= field_.mul(field_.exp(value_poly_form,i), (*it).poly()); + } + + return field_element(field_,total_sum); + } + + return field_element(field_,0); + } + + inline field_element field_polynomial::operator () (field_symbol value) + { + if (!poly_.empty()) + { + int i = 0; + field_symbol total_sum = 0 ; + + for (const_poly_iter it = poly_.begin(); it != poly_.end(); ++it, ++i) + { + total_sum ^= field_.mul(field_.exp(value,i), (*it).poly()); + } + + return field_element(field_,total_sum); + } + + return field_element(field_,0); + } + + inline const field_element field_polynomial::operator () (field_symbol value) const + { + if (!poly_.empty()) + { + int i = 0; + field_symbol total_sum = 0 ; + + for (const_poly_iter it = poly_.begin(); it != poly_.end(); ++it, ++i) + { + total_sum ^= field_.mul(field_.exp(value, i), (*it).poly()); + } + + return field_element(field_,total_sum); + } + + return field_element(field_,0); + } + + inline bool field_polynomial::operator == (const field_polynomial& polynomial) const + { + if (field_ == polynomial.field_) + { + if (poly_.size() != polynomial.poly_.size()) + return false; + else + { + const_poly_iter it0 = polynomial.poly_.begin(); + + for (const_poly_iter it1 = poly_.begin(); it1 != poly_.end(); ++it0, ++it1) + { + if ((*it0) != (*it1)) + return false; + } + + return true; + } + } + else + return false; + } + + inline bool field_polynomial::operator != (const field_polynomial& polynomial) const + { + return !(*this == 
polynomial); + } + + inline field_polynomial field_polynomial::derivative() const + { + if ((*this).poly_.size() > 1) + { + field_polynomial deriv(field_,deg()); + + const std::size_t upper_bound = poly_.size() - 1; + + for (std::size_t i = 0; i < upper_bound; i += 2) + { + deriv.poly_[i] = poly_[i + 1]; + } + + simplify(deriv); + return deriv; + } + + return field_polynomial(field_,0); + } + + inline bool field_polynomial::monic() const + { + return (poly_[poly_.size() - 1] == static_cast(1)); + } + + inline void field_polynomial::simplify() + { + simplify(*this); + } + + inline void field_polynomial::simplify(field_polynomial& polynomial) const + { + std::size_t poly_size = polynomial.poly_.size(); + + if ((poly_size > 0) && (polynomial.poly_.back() == 0)) + { + poly_iter it = polynomial.poly_.end (); + poly_iter begin = polynomial.poly_.begin(); + + std::size_t count = 0; + + while ((begin != it) && (*(--it) == 0)) + { + ++count; + } + + if (0 != count) + { + polynomial.poly_.resize(poly_size - count, field_element(field_,0)); + } + } + } + + inline field_polynomial operator + (const field_polynomial& a, const field_polynomial& b) + { + field_polynomial result = a; + result += b; + return result; + } + + inline field_polynomial operator + (const field_polynomial& a, const field_element& b) + { + field_polynomial result = a; + result += b; + return result; + } + + inline field_polynomial operator + (const field_element& a, const field_polynomial& b) + { + field_polynomial result = b; + result += a; + return result; + } + + inline field_polynomial operator + (const field_polynomial& a, const field_symbol& b) + { + return a + field_element(a.galois_field(),b); + } + + inline field_polynomial operator + (const field_symbol& a, const field_polynomial& b) + { + return b + field_element(b.galois_field(),a); + } + + inline field_polynomial operator - (const field_polynomial& a, const field_polynomial& b) + { + field_polynomial result = a; + result -= b; + return result; + } + + inline field_polynomial operator - (const field_polynomial& a, const field_element& b) + { + field_polynomial result = a; + result -= b; + return result; + } + + inline field_polynomial operator - (const field_element& a, const field_polynomial& b) + { + field_polynomial result = b; + result -= a; + return result; + } + + inline field_polynomial operator - (const field_polynomial& a, const field_symbol& b) + { + return a - field_element(a.galois_field(),b); + } + + inline field_polynomial operator - (const field_symbol& a, const field_polynomial& b) + { + return b - field_element(b.galois_field(),a); + } + + inline field_polynomial operator * (const field_polynomial& a, const field_polynomial& b) + { + field_polynomial result = a; + result *= b; + return result; + } + + inline field_polynomial operator * (const field_element& a, const field_polynomial& b) + { + field_polynomial result = b; + result *= a; + return result; + } + + inline field_polynomial operator * (const field_polynomial& a, const field_element& b) + { + field_polynomial result = a; + result *= b; + return result; + } + + inline field_polynomial operator / (const field_polynomial& a, const field_polynomial& b) + { + field_polynomial result = a; + result /= b; + return result; + } + + inline field_polynomial operator / (const field_polynomial& a, const field_element& b) + { + field_polynomial result = a; + result /= b; + return result; + } + + inline field_polynomial operator % (const field_polynomial& a, const field_polynomial& b) + { + field_polynomial 
result = a; + result %= b; + return result; + } + + inline field_polynomial operator % (const field_polynomial& a, const unsigned int& n) + { + field_polynomial result = a; + result %= n; + return result; + } + + inline field_polynomial operator ^ (const field_polynomial& a, const int& n) + { + field_polynomial result = a; + result ^= n; + return result; + } + + inline field_polynomial operator << (const field_polynomial& a, const unsigned int& n) + { + field_polynomial result = a; + result <<= n; + return result; + } + + inline field_polynomial operator >> (const field_polynomial& a, const unsigned int& n) + { + field_polynomial result = a; + result >>= n; + return result; + } + + inline field_polynomial gcd(const field_polynomial& a, const field_polynomial& b) + { + if (&a.galois_field() == &b.galois_field()) + { + if ((!a.valid()) && (!b.valid())) + { + field_polynomial error_polynomial(a.galois_field()); + return error_polynomial; + } + + if (!a.valid()) return b; + if (!b.valid()) return a; + + field_polynomial x = a % b; + field_polynomial y = b; + field_polynomial z = x; + + while ((z = (y % x)).valid()) + { + y = x; + x = z; + } + return x; + } + else + { + field_polynomial error_polynomial(a.galois_field()); + return error_polynomial; + } + } + + inline field_polynomial generate_X(const field& gfield) + { + const field_element xgfe[2] = { + galois::field_element(gfield, 0), + galois::field_element(gfield, 1) + }; + + field_polynomial X_(gfield,1,xgfe); + + return X_; + } + + inline std::ostream& operator << (std::ostream& os, const field_polynomial& polynomial) + { + if (polynomial.deg() >= 0) + { + /* + for (unsigned int i = 0; i < polynomial.poly_.size(); ++i) + { + os << polynomial.poly[i].index() + << ((i != (polynomial.deg())) ? " " : ""); + } + + std::cout << " poly form: "; + */ + + for (unsigned int i = 0; i < polynomial.poly_.size(); ++i) + { + os << polynomial.poly_[i].poly() + << " " + << "x^" + << i + << ((static_cast(i) != (polynomial.deg())) ? " + " : ""); + } + } + + return os; + } + + } // namespace galois + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_galois_utilities.hpp b/hsmodem/fec/schifra_galois_utilities.hpp new file mode 100644 index 0000000..e3c9f3e --- /dev/null +++ b/hsmodem/fec/schifra_galois_utilities.hpp @@ -0,0 +1,115 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
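
The polynomial class above (schifra_galois_field_polynomial.hpp) is what the RS encoder and decoder use for generator, syndrome and error-locator polynomials. A small sketch of constructing and evaluating one; the concrete polynomial x^2 + 1 and the main() wrapper are illustrative assumptions:

```cpp
// gfpoly_demo.cpp -- illustrative sketch only; not part of this patch or the Schifra sources.
#include <iostream>

#include "schifra_galois_field.hpp"
#include "schifra_galois_field_element.hpp"
#include "schifra_galois_field_polynomial.hpp"

int main()
{
   const schifra::galois::field gf(8,
                                   schifra::galois::primitive_polynomial_size06,
                                   schifra::galois::primitive_polynomial06);

   // p(x) = x^2 + 1 : degree 2 means three coefficients, index 0 being
   // the constant term.
   schifra::galois::field_polynomial p(gf, 2);
   p[0] = 1;
   p[2] = 1;

   // Evaluation uses field::exp()/mul() with XOR accumulation, so in
   // GF(2^8): p(2) = (2*2) ^ 1 = 4 ^ 1 = 5.
   std::cout << "deg(p) = " << p.deg()     << '\n'
             << "p(2)   = " << p(2).poly() << '\n'
             << "p      = " << p           << '\n';

   return 0;
}
```
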
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_GALOIS_UTILITIES_HPP +#define INCLUDE_SCHIFRA_GALOIS_UTILITIES_HPP + + +#include +#include +#include +#include +#include +#include +#include + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_polynomial.hpp" + + +namespace schifra +{ + + namespace galois + { + + inline std::string convert_to_string(const unsigned int& value, const unsigned int& width) + { + std::stringstream stream; + stream << std::setw(width) << std::setfill('0') << value; + return stream.str(); + } + + inline std::string convert_to_string(const int& value, const unsigned int& width) + { + std::stringstream stream; + stream << std::setw(width) << std::setfill('0') << value; + return stream.str(); + } + + inline std::string convert_to_bin(const unsigned int& value, const unsigned int& field_descriptor) + { + std::string output = std::string(field_descriptor, ' '); + + for (unsigned int i = 0; i < field_descriptor; ++i) + { + output[i] = ((((value >> (field_descriptor - 1 - i)) & 1) == 1) ? '1' : '0'); + } + + return output; + } + + inline void alpha_table(std::ostream& os, const field& gf) + { + std::vector str_list; + + for (unsigned int i = 0; i < gf.size() + 1; ++i) + { + str_list.push_back("alpha^" + convert_to_string(gf.index(i),2) + "\t" + + convert_to_bin (i,gf.pwr()) + "\t" + + convert_to_string(gf.alpha(i),2)); + } + + std::sort(str_list.begin(),str_list.end()); + std::copy(str_list.begin(),str_list.end(),std::ostream_iterator(os,"\n")); + } + + inline void polynomial_alpha_form(std::ostream& os, const field_polynomial& polynomial) + { + for (int i = 0; i < (polynomial.deg() + 1); ++i) + { + field_symbol alpha_power = polynomial.galois_field().index(polynomial[i].poly()); + + if (alpha_power != 0) + os << static_cast(224) << "^" << convert_to_string(alpha_power,2); + else + os << 1; + + os << " * " + << "x^" + << i + << ((i != (polynomial.deg())) ? " + " : ""); + } + } + + inline void polynomial_alpha_form(std::ostream& os, const std::string& prepend, const field_polynomial& polynomial) + { + os << prepend; + polynomial_alpha_form(os,polynomial); + os << std::endl; + } + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_bitio.hpp b/hsmodem/fec/schifra_reed_solomon_bitio.hpp new file mode 100644 index 0000000..6130d47 --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_bitio.hpp @@ -0,0 +1,201 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
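
schifra_galois_utilities.hpp (above) only provides debug and pretty-printing helpers. A brief sketch of how they might be called when inspecting a field; again purely illustrative and not part of the patch:

```cpp
// gf_debug_demo.cpp -- illustrative sketch only; not part of this patch or the Schifra sources.
#include <iostream>

#include "schifra_galois_field.hpp"
#include "schifra_galois_utilities.hpp"

int main()
{
   const schifra::galois::field gf(8,
                                   schifra::galois::primitive_polynomial_size06,
                                   schifra::galois::primitive_polynomial06);

   // Dump the full alpha^i / binary / symbol table of GF(2^8) -- handy when
   // checking that the field was built from the intended primitive polynomial.
   schifra::galois::alpha_table(std::cout, gf);

   // Render 0xB4 as an 8-bit binary string.
   std::cout << schifra::galois::convert_to_bin(0xB4, gf.pwr()) << '\n';

   return 0;
}
```
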
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_BITIO_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_BITIO_HPP + + +#include + + +namespace schifra +{ + + namespace reed_solomon + { + + namespace bitio + { + + template class convert_data_to_symbol; + + template <> + class convert_data_to_symbol<2> + { + public: + + template + convert_data_to_symbol(const BitBlock data[], const std::size_t data_length, int symbol[]) + { + const BitBlock* d_it = & data[0]; + int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; ++i, ++d_it, s_it+=4) + { + (* s_it ) = (*d_it) & 0x03; + (*(s_it + 1)) = ((*d_it) >> 2) & 0x03; + (*(s_it + 2)) = ((*d_it) >> 4) & 0x03; + (*(s_it + 3)) = ((*d_it) >> 6) & 0x03; + } + } + }; + + template <> + class convert_data_to_symbol<4> + { + public: + + template + convert_data_to_symbol(const BitBlock data[], const std::size_t data_length, int symbol[]) + { + const BitBlock* d_it = & data[0]; + int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; ++i, ++d_it, s_it+=2) + { + (* s_it ) = (*d_it) & 0x0F; + (*(s_it + 1)) = ((*d_it) >> 4) & 0x0F; + } + } + }; + + template <> + class convert_data_to_symbol<8> + { + public: + + template + convert_data_to_symbol(const BitBlock data[], const std::size_t data_length, int symbol[]) + { + const BitBlock* d_it = & data[0]; + int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; ++i, ++d_it, ++s_it) + { + (*s_it) = (*d_it) & 0xFF; + } + } + }; + + template <> + class convert_data_to_symbol<16> + { + public: + + template + convert_data_to_symbol(const BitBlock data[], const std::size_t data_length, int symbol[]) + { + const BitBlock* d_it = & data[0]; + int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; i+=2, d_it+=2, ++s_it) + { + (*s_it) = (*d_it) & 0x000000FF; + (*s_it) |= (static_cast((*(d_it + 1))) << 8) & 0x0000FF00; + } + } + }; + + template <> + class convert_data_to_symbol<24> + { + public: + + template + convert_data_to_symbol(const BitBlock data[], const std::size_t data_length, int symbol[]) + { + BitBlock* d_it = & data[0]; + int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; i+=3, d_it+=3, ++s_it) + { + (*s_it) |= (*d_it) & 0x000000FF; + (*s_it) |= (static_cast((*(d_it + 1))) << 8) & 0x0000FF00; + (*s_it) |= (static_cast((*(d_it + 2))) << 16) & 0x00FF0000; + } + } + }; + + template class convert_symbol_to_data; + + template <> + class convert_symbol_to_data<4> + { + public: + + template + convert_symbol_to_data(const int symbol[], BitBlock data[], const std::size_t data_length) + { + BitBlock* d_it = & data[0]; + const int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; ++i, ++d_it, ++s_it) + { + (*d_it) = (*s_it) & 0x0000000F; + (*d_it) |= ((*(s_it + 1)) & 0x0000000F) << 4; + } + } + }; + + template <> + class convert_symbol_to_data<8> + { + public: + template + convert_symbol_to_data(const int symbol[], BitBlock data[], const std::size_t data_length) + { + BitBlock* d_it = & data[0]; + const int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; ++i, ++d_it, ++s_it) + { + (*d_it) = static_cast((*s_it) & 0xFF); + } + } + }; + + template <> + class convert_symbol_to_data<16> + { + public: + + template + convert_symbol_to_data(const int symbol[], BitBlock data[], const std::size_t data_length) + { + BitBlock* d_it = & data[0]; + const int* s_it = &symbol[0]; + + for (std::size_t i = 0; 
i < data_length; ++i, ++d_it, ++s_it) + { + (*d_it) = (*s_it) & 0xFFFF; + } + } + }; + + } // namespace bitio + + } // namespace reed_solomon + +} // namespace schifra + + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_block.hpp b/hsmodem/fec/schifra_reed_solomon_block.hpp new file mode 100644 index 0000000..ec1852c --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_block.hpp @@ -0,0 +1,382 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_BLOCK_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_BLOCK_HPP + + +#include +#include + +#include "schifra_galois_field.hpp" +#include "schifra_ecc_traits.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + struct block + { + public: + + typedef galois::field_symbol symbol_type; + typedef traits::reed_solomon_triat trait; + typedef traits::symbol symbol; + typedef block block_t; + + enum error_t + { + e_no_error = 0, + e_encoder_error0 = 1, + e_encoder_error1 = 2, + e_decoder_error0 = 3, + e_decoder_error1 = 4, + e_decoder_error2 = 5, + e_decoder_error3 = 6, + e_decoder_error4 = 7 + }; + + block() + : errors_detected (0), + errors_corrected(0), + zero_numerators (0), + unrecoverable(false), + error(e_no_error) + { + traits::validate_reed_solomon_block_parameters(); + } + + block(const std::string& _data, const std::string& _fec) + : errors_detected (0), + errors_corrected(0), + zero_numerators (0), + unrecoverable(false), + error(e_no_error) + { + traits::validate_reed_solomon_block_parameters(); + + for (std::size_t i = 0; i < data_length; ++i) + { + data[i] = static_cast(_data[i]); + } + + for (std::size_t i = 0; i < fec_length; ++i) + { + data[i + data_length] = static_cast(_fec[i]); + } + } + + galois::field_symbol& operator[](const std::size_t& index) + { + return data[index]; + } + + const galois::field_symbol& operator[](const std::size_t& index) const + { + return data[index]; + } + + galois::field_symbol& operator()(const std::size_t& index) + { + return operator[](index); + } + + galois::field_symbol& fec(const std::size_t& index) + { + return data[data_length + index]; + } + + bool data_to_string(std::string& data_str) const + { + if (data_str.length() != data_length) + { + return false; + } + + for (std::size_t i = 0; i < data_length; ++i) + { + data_str[i] = static_cast(data[i]); + } + + return true; + } + + bool fec_to_string(std::string& fec_str) const + { + if (fec_str.length() != fec_length) + { + return false; + } + + for (std::size_t i = 0; i < fec_length; ++i) + { + fec_str[i] = static_cast(data[data_length + i]); + } + + return true; + } + + std::string fec_to_string() const + { + std::string fec_str(fec_length,0x00); + fec_to_string(fec_str); + return fec_str; + } + + void clear(galois::field_symbol value = 0) + { + for 
(std::size_t i = 0; i < code_length; ++i) + { + data[i] = value; + } + } + + void clear_data(galois::field_symbol value = 0) + { + for (std::size_t i = 0; i < data_length; ++i) + { + data[i] = value; + } + } + + void clear_fec(galois::field_symbol value = 0) + { + for (std::size_t i = 0; i < fec_length; ++i) + { + data[data_length + i] = value; + } + } + + void reset(galois::field_symbol value = 0) + { + clear(value); + errors_detected = 0; + errors_corrected = 0; + zero_numerators = 0; + unrecoverable = false; + error = e_no_error; + } + + template + void copy_state(const BlockType& b) + { + errors_detected = b.errors_detected; + errors_corrected = b.errors_corrected; + zero_numerators = b.zero_numerators; + unrecoverable = b.unrecoverable; + error = static_cast(b.error); + } + + inline std::string error_as_string() const + { + switch (error) + { + case e_no_error : return "No Error"; + case e_encoder_error0 : return "Invalid Encoder"; + case e_encoder_error1 : return "Incompatible Generator Polynomial"; + case e_decoder_error0 : return "Invalid Decoder"; + case e_decoder_error1 : return "Decoder Failure - Non-zero Syndrome"; + case e_decoder_error2 : return "Decoder Failure - Too Many Errors/Erasures"; + case e_decoder_error3 : return "Decoder Failure - Invalid Symbol Correction"; + case e_decoder_error4 : return "Decoder Failure - Invalid Codeword Correction"; + default : return "Invalid Error Code"; + } + } + + std::size_t errors_detected; + std::size_t errors_corrected; + std::size_t zero_numerators; + bool unrecoverable; + error_t error; + galois::field_symbol data[code_length]; + }; + + template + inline void copy(const block& src_block, block& dest_block) + { + for (std::size_t index = 0; index < code_length; ++index) + { + dest_block.data[index] = src_block.data[index]; + } + } + + template + inline void copy(const T src_data[], block& dest_block) + { + for (std::size_t index = 0; index < (code_length - fec_length); ++index, ++src_data) + { + dest_block.data[index] = static_cast::symbol_type>(*src_data); + } + } + + template + inline void copy(const T src_data[], + const std::size_t& src_length, + block& dest_block) + { + for (std::size_t index = 0; index < src_length; ++index, ++src_data) + { + dest_block.data[index] = static_cast::symbol_type>(*src_data); + } + } + + template + inline void copy(const block src_block_stack[stack_size], + block dest_block_stack[stack_size]) + { + for (std::size_t row = 0; row < stack_size; ++row) + { + copy(src_block_stack[row], dest_block_stack[row]); + } + } + + template + inline bool copy(const T src_data[], + const std::size_t src_length, + block dest_block_stack[stack_size]) + { + const std::size_t data_length = code_length - fec_length; + + if (src_length > (stack_size * data_length)) + { + return false; + } + + const std::size_t row_count = src_length / data_length; + + for (std::size_t row = 0; row < row_count; ++row, src_data += data_length) + { + copy(src_data, dest_block_stack[row]); + } + + if ((src_length % data_length) != 0) + { + copy(src_data, src_length % data_length, dest_block_stack[row_count]); + } + + return true; + } + + template + inline void full_copy(const block& src_block, + T dest_data[]) + { + for (std::size_t i = 0; i < code_length; ++i, ++dest_data) + { + (*dest_data) = static_cast(src_block[i]); + } + } + + template + inline void copy(const block src_block_stack[stack_size], + T dest_data[]) + { + const std::size_t data_length = code_length - fec_length; + + for (std::size_t i = 0; i < stack_size; ++i) + { + for 
(std::size_t j = 0; j < data_length; ++j, ++dest_data) + { + (*dest_data) = static_cast(src_block_stack[i][j]); + } + } + } + + template + inline std::ostream& operator<<(std::ostream& os, const block& rs_block) + { + for (std::size_t i = 0; i < code_length; ++i) + { + os << static_cast(rs_block[i]); + } + + return os; + } + + template + struct data_block + { + public: + + typedef T value_type; + + T& operator[](const std::size_t index) { return data[index]; } + const T& operator[](const std::size_t index) const { return data[index]; } + + T* begin() { return data; } + const T* begin() const { return data; } + + T* end() { return data + block_length; } + const T* end() const { return data + block_length; } + + void clear(T value = 0) + { + for (std::size_t i = 0; i < block_length; ++i) + { + data[i] = value; + } + } + + private: + + T data[block_length]; + }; + + template + inline void copy(const data_block& src_block, data_block& dest_block) + { + for (std::size_t index = 0; index < block_length; ++index) + { + dest_block[index] = src_block[index]; + } + } + + template + inline void copy(const data_block src_block_stack[stack_size], + data_block dest_block_stack[stack_size]) + { + for (std::size_t row = 0; row < stack_size; ++row) + { + copy(src_block_stack[row], dest_block_stack[row]); + } + } + + template + inline void full_copy(const data_block& src_block, T dest_data[]) + { + for (std::size_t i = 0; i < block_length; ++i, ++dest_data) + { + (*dest_data) = static_cast(src_block[i]); + } + } + + typedef std::vector erasure_locations_t; + + } // namespace reed_solomon + +} // namepsace schifra + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_codec_validator.hpp b/hsmodem/fec/schifra_reed_solomon_codec_validator.hpp new file mode 100644 index 0000000..3057c39 --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_codec_validator.hpp @@ -0,0 +1,998 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
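
schifra_reed_solomon_block.hpp (completed above), together with the earlier bitio helpers, defines the container the encoder and decoder operate on. A minimal sketch of filling a block's data symbols from raw bytes; the RS(255,16) parameters, the payload and the main() wrapper are assumptions chosen for illustration, not taken from this patch:

```cpp
// rs_block_demo.cpp -- illustrative sketch only; not part of this patch or the Schifra sources.
#include <iostream>
#include <string>

#include "schifra_reed_solomon_bitio.hpp"
#include "schifra_reed_solomon_block.hpp"

int main()
{
   // RS(255,239): 255 eight-bit symbols per codeword, 16 of them parity.
   const std::size_t code_length = 255;
   const std::size_t fec_length  = 16;

   schifra::reed_solomon::block<code_length, fec_length> rs_block;
   rs_block.clear();

   // Map raw bytes onto the first symbols of the block; with 8-bit symbols
   // this is a one-to-one copy (the conversion happens in the constructor
   // of convert_data_to_symbol<8>).
   const unsigned char payload[4] = { 'T', 'E', 'S', 'T' };
   schifra::reed_solomon::bitio::convert_data_to_symbol<8> pack(payload, 4, &rs_block[0]);

   std::cout << rs_block.error_as_string()          << '\n'; // "No Error"
   std::cout << static_cast<char>(rs_block[0])      << '\n'; // 'T'

   return 0;
}
```
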
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_CODEC_VALIDATOR_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_CODEC_VALIDATOR_HPP + + +#include +#include +#include + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_polynomial.hpp" +#include "schifra_sequential_root_generator_polynomial_creator.hpp" +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_ecc_traits.hpp" +#include "schifra_error_processes.hpp" +#include "schifra_utilities.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template , + typename decoder_type = decoder, + std::size_t data_length = code_length - fec_length> + class codec_validator + { + public: + + typedef block block_type; + + codec_validator(const galois::field& gf, + const unsigned int gpii, + const std::string& msg) + : field_(gf), + generator_polynomial_(galois::field_polynomial(field_)), + rs_encoder_(reinterpret_cast(0)), + rs_decoder_(reinterpret_cast(0)), + message(msg), + genpoly_initial_index_(gpii), + blocks_processed_(0), + block_failures_(0) + { + traits::equivalent_encoder_decoder(); + + if ( + !make_sequential_root_generator_polynomial(field_, + genpoly_initial_index_, + fec_length, + generator_polynomial_) + ) + { + return; + } + + rs_encoder_ = new encoder_type(field_,generator_polynomial_); + rs_decoder_ = new decoder_type(field_,genpoly_initial_index_); + + if (!rs_encoder_->encode(message,rs_block_original)) + { + std::cout << "codec_validator() - ERROR: Encoding process failed!" << std::endl; + return; + } + } + + bool execute() + { + schifra::utils::timer timer; + timer.start(); + + bool result = stage1() && + stage2() && + stage3() && + stage4() && + stage5() && + stage6() && + stage7() && + stage8() && + stage9() && + stage10() && + stage11() && + stage12() ; + + timer.stop(); + + double time = timer.time(); + + print_codec_properties(); + std::cout << "Blocks decoded: " << blocks_processed_ << + "\tDecoding Failures: " << block_failures_ << + "\tRate: " << ((blocks_processed_ * data_length) * 8.0) / (1048576.0 * time) << "Mbps" << std::endl; + /* + Note: The throughput rate is not only the throughput of reed solomon + encoding and decoding, but also that of the steps needed to add + simulated transmission errors to the reed solomon block such as + the calculation of the positions and additions of errors and + erasures to the reed solomon block, which normally in a true + data transmission medium would not be taken into consideration. + */ + return result; + } + + ~codec_validator() + { + delete rs_encoder_; + delete rs_decoder_; + } + + void print_codec_properties() + { + std::cout << "Codec: RS(" << code_length << "," << data_length << "," << fec_length <<") "; + } + + private: + + bool stage1() + { + /* Burst Error Only Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + for (std::size_t error_count = 1; error_count <= (fec_length >> 1); ++error_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_errors + ( + rs_block, + error_count, + start_position, + 1 + ); + + if (!rs_decoder_->decode(rs_block)) + { + print_codec_properties(); + std::cout << "stage1() - Decoding Failure! 
start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage1() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage1() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != error_count) + { + print_codec_properties(); + std::cout << "stage1() - Error In The Number Of Detected Errors! Errors Detected: " << rs_block.errors_detected << std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected != error_count) + { + print_codec_properties(); + std::cout << "stage1() - Error In The Number Of Corrected Errors! Errors Corrected: " << rs_block.errors_corrected << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage2() + { + /* Burst Erasure Only Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_erasures + ( + rs_block, + erasure_list, + erasure_count, + start_position, + 1 + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage2() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + std::cout << "stage2() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage2() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != erasure_count) + { + print_codec_properties(); + std::cout << "stage2() - Error In The Number Of Detected Errors! Errors Detected: " << rs_block.errors_detected << std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected != erasure_count) + { + print_codec_properties(); + std::cout << "stage2() - Error In The Number Of Corrected Errors! 
Errors Corrected: " << rs_block.errors_corrected << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + erasure_list.clear(); + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage3() + { + /* Consecutive Burst Erasure and Error Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_errors_erasures + ( + rs_block, + error_mode::erasures_errors, + start_position,erasure_count, + erasure_list + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage3() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage3() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage3() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + erasure_list.clear(); + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage4() + { + /* Consecutive Burst Error and Erasure Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_errors_erasures + ( + rs_block, + error_mode::errors_erasures, + start_position, + erasure_count, + erasure_list + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage4() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage4() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage4() - Discrepancy between the number of errors detected and corrected. 
[" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + erasure_list.clear(); + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage5() + { + /* Distanced Burst Erasure and Error Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t between_distance = 1; between_distance <= 10; ++between_distance) + { + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_errors_erasures + ( + rs_block, + error_mode::erasures_errors, + start_position, + erasure_count, + erasure_list, + between_distance + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage5() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage5() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage5() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + erasure_list.clear(); + } + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage6() + { + /* Distanced Burst Error and Erasure Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t between_distance = 1; between_distance <= 10; ++between_distance) + { + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_errors_erasures + ( + rs_block, + error_mode::errors_erasures, + start_position, + erasure_count, + erasure_list,between_distance + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage6() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage6() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage6() - Discrepancy between the number of errors detected and corrected. 
[" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + erasure_list.clear(); + } + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage7() + { + /* Intermittent Error Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + for (std::size_t error_count = 1; error_count < (fec_length >> 1); ++error_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + for (std::size_t scale = 1; scale < 5; ++scale) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_errors + ( + rs_block, + error_count, + start_position, + scale + ); + + if (!rs_decoder_->decode(rs_block)) + { + print_codec_properties(); + std::cout << "stage7() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage7() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage7() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != error_count) + { + print_codec_properties(); + std::cout << "stage7() - Error In The Number Of Detected Errors! Errors Detected: " << rs_block.errors_detected << std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected != error_count) + { + print_codec_properties(); + std::cout << "stage7() - Error In The Number Of Corrected Errors! Errors Corrected: " << rs_block.errors_corrected << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + } + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage8() + { + /* Intermittent Erasure Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + for (std::size_t scale = 4; scale < 5; ++scale) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_erasures + ( + rs_block, + erasure_list, + erasure_count, + start_position, + scale + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage8() - Decoding Failure! start position: " << start_position << "\t scale: " << scale << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage8() - Error Correcting Failure! start position: " << start_position << "\t scale: " << scale < erasure_count) + { + print_codec_properties(); + std::cout << "stage8() - Error In The Number Of Detected Errors! Errors Detected: " << rs_block.errors_detected << std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected > erasure_count) + { + print_codec_properties(); + std::cout << "stage8() - Error In The Number Of Corrected Errors! 
Errors Corrected: " << rs_block.errors_corrected << std::endl; + ++block_failures_; + } + ++blocks_processed_; + erasure_list.clear(); + } + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage9() + { + /* Burst Interleaved Error and Erasure Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_interleaved_errors_erasures + ( + rs_block, + start_position, + erasure_count, + erasure_list + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage9() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage9() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage9() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + ++blocks_processed_; + erasure_list.clear(); + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage10() + { + /* Segmented Burst Errors */ + + const std::size_t initial_failure_count = block_failures_; + + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + for (std::size_t distance_between_blocks = 0; distance_between_blocks < 5; ++distance_between_blocks) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_errors_segmented + ( + rs_block, + start_position, + distance_between_blocks + ); + + if (!rs_decoder_->decode(rs_block)) + { + print_codec_properties(); + std::cout << "stage10() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage10() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage10() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage11() + { + /* No Errors */ + + const std::size_t initial_failure_count = block_failures_; + + block_type rs_block = rs_block_original; + + if (!rs_decoder_->decode(rs_block)) + { + print_codec_properties(); + std::cout << "stage11() - Decoding Failure!" << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage11() - Error Correcting Failure!" << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != 0) + { + print_codec_properties(); + std::cout << "stage11() - Error Correcting Failure!" 
<< std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected != 0) + { + print_codec_properties(); + std::cout << "stage11() - Error Correcting Failure!" << std::endl; + ++block_failures_; + } + else if (rs_block.unrecoverable) + { + print_codec_properties(); + std::cout << "stage11() - Error Correcting Failure!" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + + return (block_failures_ == initial_failure_count); + } + + bool stage12() + { + /* Random Errors Only */ + + const std::size_t initial_failure_count = block_failures_; + + std::vector random_error_index; + generate_error_index((fec_length >> 1),random_error_index,0xA5A5A5A5); + + for (std::size_t error_count = 1; error_count <= (fec_length >> 1); ++error_count) + { + for (std::size_t error_index = 0; error_index < error_index_size; ++error_index) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_errors_at_index + ( + rs_block, + error_count, + error_index, + random_error_index + ); + + if (!rs_decoder_->decode(rs_block)) + { + print_codec_properties(); + std::cout << "stage12() - Decoding Failure! error index: " << error_index << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage12() - Error Correcting Failure! error index: " << error_index << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage12() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != error_count) + { + print_codec_properties(); + std::cout << "stage12() - Error In The Number Of Detected Errors! Errors Detected: " << rs_block.errors_detected << std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected != error_count) + { + print_codec_properties(); + std::cout << "stage12() - Error In The Number Of Corrected Errors! 
Errors Corrected: " << rs_block.errors_corrected << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + } + } + + return (block_failures_ == initial_failure_count); + } + + protected: + + codec_validator() {} + + private: + + codec_validator(const codec_validator&); + const codec_validator& operator=(const codec_validator&); + + const galois::field& field_; + galois::field_polynomial generator_polynomial_; + encoder_type* rs_encoder_; + decoder_type* rs_decoder_; + block_type rs_block_original; + const std::string& message; + const unsigned int genpoly_initial_index_; + unsigned int blocks_processed_; + unsigned int block_failures_; + }; + + template + void create_messages(std::vector& message_list, const bool full_test_set = false) + { + /* Various message bit patterns */ + + message_list.clear(); + + if (full_test_set) + { + for (std::size_t i = 0; i < 256; ++i) + { + message_list.push_back(std::string(data_length, static_cast(i))); + } + } + else + { + message_list.push_back(std::string(data_length,static_cast(0x00))); + message_list.push_back(std::string(data_length,static_cast(0xAA))); + message_list.push_back(std::string(data_length,static_cast(0xA5))); + message_list.push_back(std::string(data_length,static_cast(0xAC))); + message_list.push_back(std::string(data_length,static_cast(0xCA))); + message_list.push_back(std::string(data_length,static_cast(0x5A))); + message_list.push_back(std::string(data_length,static_cast(0xCC))); + message_list.push_back(std::string(data_length,static_cast(0xF0))); + message_list.push_back(std::string(data_length,static_cast(0x0F))); + message_list.push_back(std::string(data_length,static_cast(0xFF))); + message_list.push_back(std::string(data_length,static_cast(0x92))); + message_list.push_back(std::string(data_length,static_cast(0x6D))); + message_list.push_back(std::string(data_length,static_cast(0x77))); + message_list.push_back(std::string(data_length,static_cast(0x7A))); + message_list.push_back(std::string(data_length,static_cast(0xA7))); + message_list.push_back(std::string(data_length,static_cast(0xE5))); + message_list.push_back(std::string(data_length,static_cast(0xEB))); + } + + std::string tmp_str = std::string(data_length,static_cast(0x00)); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = static_cast(i); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = static_cast(i); + } + + message_list.push_back(tmp_str); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = (((i & 0x01) == 1) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = (((i & 0x01) == 0) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = (((i & 0x01) == 1) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = (((i & 0x01) == 0) ? 
static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + tmp_str = std::string(data_length,static_cast(0x00)); + + for (std::size_t i = 0; i < (data_length >> 1); ++i) + { + tmp_str[i] = static_cast(0xFF); + } + + message_list.push_back(tmp_str); + + tmp_str = std::string(data_length,static_cast(0xFF)); + + for (std::size_t i = 0; i < (data_length >> 1); ++i) + { + tmp_str[i] = static_cast(0x00); + } + + message_list.push_back(tmp_str); + } + + template + inline bool codec_validation_test(const std::size_t prim_poly_size,const unsigned int prim_poly[]) + { + const unsigned int data_length = code_length - fec_length; + + galois::field field(field_descriptor,prim_poly_size,prim_poly); + std::vector message_list; + create_messages(message_list); + + for (std::size_t i = 0; i < message_list.size(); ++i) + { + codec_validator + validator(field, gen_poly_index, message_list[i]); + + if (!validator.execute()) + { + return false; + } + } + + return true; + } + + template + inline bool shortened_codec_validation_test(const std::size_t prim_poly_size,const unsigned int prim_poly[]) + { + typedef shortened_encoder encoder_type; + typedef shortened_decoder decoder_type; + + const unsigned int data_length = code_length - fec_length; + + galois::field field(field_descriptor,prim_poly_size,prim_poly); + std::vector message_list; + create_messages(message_list); + + for (std::size_t i = 0; i < message_list.size(); ++i) + { + codec_validator + validator(field,gen_poly_index,message_list[i]); + + if (!validator.execute()) + { + return false; + } + + } + + return true; + } + + inline bool codec_validation_test00() + { + return codec_validation_test<8,120,255, 2>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 4>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 6>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 10>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 12>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 14>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 16>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 18>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 20>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 22>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 24>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 32>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 64>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 80>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 96>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255,128>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) ; + } + + inline bool codec_validation_test01() + { + return 
shortened_codec_validation_test<8,120,126,14>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && /* Intelsat 1 RS Code */ + shortened_codec_validation_test<8,120,194,16>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && /* Intelsat 2 RS Code */ + shortened_codec_validation_test<8,120,219,18>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && /* Intelsat 3 RS Code */ + shortened_codec_validation_test<8,120,225,20>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && /* Intelsat 4 RS Code */ + shortened_codec_validation_test<8, 1,204,16>(galois::primitive_polynomial_size05,galois::primitive_polynomial05) && /* DBV/MPEG-2 TSP RS Code */ + shortened_codec_validation_test<8, 1,104,27>(galois::primitive_polynomial_size05,galois::primitive_polynomial05) && /* Magnetic Storage Outer RS Code */ + shortened_codec_validation_test<8, 1,204,12>(galois::primitive_polynomial_size05,galois::primitive_polynomial05) && /* Magnetic Storage Inner RS Code */ + shortened_codec_validation_test<8,120, 72,10>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) ; /* VDL Mode 3 RS Code */ + } + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_decoder.hpp b/hsmodem/fec/schifra_reed_solomon_decoder.hpp new file mode 100755 index 0000000..1b32a72 --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_decoder.hpp @@ -0,0 +1,499 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_DECODER_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_DECODER_HPP + + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_element.hpp" +#include "schifra_galois_field_polynomial.hpp" +#include "schifra_reed_solomon_block.hpp" +#include "schifra_ecc_traits.hpp" + + +namespace schifra +{ + namespace reed_solomon + { + + template + class decoder + { + public: + + typedef traits::reed_solomon_triat trait; + typedef block block_type; + + decoder(const galois::field& field, const unsigned int& gen_initial_index = 0) + : decoder_valid_(field.size() == code_length), + field_(field), + X_(galois::generate_X(field_)), + gen_initial_index_(gen_initial_index) + { + if (decoder_valid_) + { + //Note: code_length and field size can be used interchangeably + create_lookup_tables(); + } + }; + + const galois::field& field() const + { + return field_; + } + + bool decode(block_type& rsblock) const + { + std::vector erasure_list; + return decode(rsblock,erasure_list); + } + + bool decode(block_type& rsblock, const erasure_locations_t& erasure_list) const + { + if ((!decoder_valid_) || (erasure_list.size() > fec_length)) + { + rsblock.errors_detected = 0; + rsblock.errors_corrected = 0; + rsblock.zero_numerators = 0; + rsblock.unrecoverable = true; + rsblock.error = block_type::e_decoder_error0; + + return false; + } + + galois::field_polynomial received(field_,code_length - 1); + load_message(received,rsblock); + + galois::field_polynomial syndrome(field_); + + if (compute_syndrome(received,syndrome) == 0) + { + rsblock.errors_detected = 0; + rsblock.errors_corrected = 0; + rsblock.zero_numerators = 0; + rsblock.unrecoverable = false; + + return true; + } + + galois::field_polynomial lambda(galois::field_element(field_,1)); + + erasure_locations_t erasure_locations; + + if (!erasure_list.empty()) + { + prepare_erasure_list(erasure_locations, erasure_list); + + compute_gamma(lambda, erasure_locations); + } + + if (erasure_list.size() < fec_length) + { + modified_berlekamp_massey_algorithm(lambda, syndrome, erasure_list.size()); + } + + std::vector error_locations; + + find_roots(lambda, error_locations); + + if (0 == error_locations.size()) + { + /* + Syndrome is non-zero yet no error locations have + been obtained, conclusion: + It is possible that there are MORE errrors in the + message than can be detected and corrected for this + particular code. + */ + + rsblock.errors_detected = 0; + rsblock.errors_corrected = 0; + rsblock.zero_numerators = 0; + rsblock.unrecoverable = true; + rsblock.error = block_type::e_decoder_error1; + + return false; + } + else if (((2 * error_locations.size()) - erasure_list.size()) > fec_length) + { + /* + Too many errors\erasures! 
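+                     The code can correct E errors and S erasures only while
+                     2E + S <= fec_length; expressed in terms of the L error
+                     locations found above, this bound becomes: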
2E + S <= fec_length + L = E + S + E = L - S + 2E = 2L - 2S + 2E + S = 2L - 2S + S + = 2L - S + Where: + L : Error Locations + E : Errors + S : Erasures + + */ + + rsblock.errors_detected = error_locations.size(); + rsblock.errors_corrected = 0; + rsblock.zero_numerators = 0; + rsblock.unrecoverable = true; + rsblock.error = block_type::e_decoder_error2; + + return false; + } + else + rsblock.errors_detected = error_locations.size(); + + return forney_algorithm(error_locations, lambda, syndrome, rsblock); + } + + private: + + decoder(); + decoder(const decoder& dec); + decoder& operator=(const decoder& dec); + + protected: + + void load_message(galois::field_polynomial& received, const block_type& rsblock) const + { + /* + Load message data into received polynomial in reverse order. + */ + + for (std::size_t i = 0; i < code_length; ++i) + { + received[code_length - 1 - i] = rsblock[i]; + } + } + + void create_lookup_tables() + { + root_exponent_table_.reserve(field_.size() + 1); + + for (int i = 0; i < static_cast(field_.size() + 1); ++i) + { + root_exponent_table_.push_back(field_.exp(field_.alpha(code_length - i),(1 - gen_initial_index_))); + } + + syndrome_exponent_table_.reserve(fec_length); + + for (int i = 0; i < static_cast(fec_length); ++i) + { + syndrome_exponent_table_.push_back(field_.alpha(gen_initial_index_ + i)); + } + + gamma_table_.reserve(field_.size() + 1); + + for (int i = 0; i < static_cast(field_.size() + 1); ++i) + { + gamma_table_.push_back((1 + (X_ * galois::field_element(field_,field_.alpha(i))))); + } + } + + void prepare_erasure_list(erasure_locations_t& erasure_locations, const erasure_locations_t& erasure_list) const + { + /* + Note: 1. Erasure positions must be unique. + 2. Erasure positions must exist within the code block. + There are NO exceptions to these rules! + */ + + erasure_locations.resize(erasure_list.size()); + + for (std::size_t i = 0; i < erasure_list.size(); ++i) + { + erasure_locations[i] = (code_length - 1 - erasure_list[i]); + } + } + + int compute_syndrome(const galois::field_polynomial& received, + galois::field_polynomial& syndrome) const + { + int error_flag = 0; + syndrome = galois::field_polynomial(field_,fec_length - 1); + + for (std::size_t i = 0; i < fec_length; ++i) + { + syndrome[i] = received(syndrome_exponent_table_[i]); + error_flag |= syndrome[i].poly(); + } + + return error_flag; + } + + void compute_gamma(galois::field_polynomial& gamma, const erasure_locations_t& erasure_locations) const + { + for (std::size_t i = 0; i < erasure_locations.size(); ++i) + { + gamma *= gamma_table_[erasure_locations[i]]; + } + } + + void find_roots(const galois::field_polynomial& poly, std::vector& root_list) const + { + /* + Chien Search: Find the roots of the error locator polynomial + via an exhaustive search over all non-zero elements in the + given finite field. 
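+               Concretely, the loop below evaluates the locator polynomial at
+               alpha^1 through alpha^code_length and records every exponent i
+               for which the result is zero, stopping early once deg(lambda)
+               roots have been found.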
+ */ + + root_list.reserve(fec_length << 1); + root_list.resize(0); + + const std::size_t polynomial_degree = poly.deg(); + + for (int i = 1; i <= static_cast(code_length); ++i) + { + if (0 == poly(field_.alpha(i)).poly()) + { + root_list.push_back(i); + + if (polynomial_degree == root_list.size()) + { + break; + } + } + } + } + + void compute_discrepancy(galois::field_element& discrepancy, + const galois::field_polynomial& lambda, + const galois::field_polynomial& syndrome, + const std::size_t& l, + const std::size_t& round) const + { + /* + * + Compute the lambda discrepancy at the current round of BMA + + min: if(a(l), lambda.deg()); + // does not compile under Windows and has been replaced with: + + + std::size_t bb = lambda.deg(); + std::size_t aa = static_cast(l); + + std::size_t upper_bound = 0; + if (aa < bb) + upper_bound = aa; + else + upper_bound = bb; + + discrepancy = 0; + + for (std::size_t i = 0; i <= upper_bound; ++i) + { + discrepancy += lambda[i] * syndrome[round - i]; + } + } + + void modified_berlekamp_massey_algorithm(galois::field_polynomial& lambda, + const galois::field_polynomial& syndrome, + const std::size_t erasure_count) const + { + /* + Modified Berlekamp-Massey Algorithm + Identify the shortest length linear feed-back shift register (LFSR) + that will generate the sequence equivalent to the syndrome. + */ + + int i = -1; + std::size_t l = erasure_count; + + galois::field_element discrepancy(field_,0); + galois::field_polynomial previous_lambda = lambda << 1; + + for (std::size_t round = erasure_count; round < fec_length; ++round) + { + compute_discrepancy(discrepancy, lambda, syndrome, l, round); + + if (discrepancy != 0) + { + galois::field_polynomial tau = lambda - (discrepancy * previous_lambda); + + if (static_cast(l) < (static_cast(round) - i)) + { + const std::size_t tmp = round - i; + i = static_cast(round - l); + l = tmp; + previous_lambda = lambda / discrepancy; + } + + lambda = tau; + } + + previous_lambda <<= 1; + } + } + + bool forney_algorithm(const std::vector& error_locations, + const galois::field_polynomial& lambda, + const galois::field_polynomial& syndrome, + block_type& rsblock) const + { + /* + The Forney algorithm for computing the error magnitudes + */ + const galois::field_polynomial omega = (lambda * syndrome) % fec_length; + const galois::field_polynomial lambda_derivative = lambda.derivative(); + + rsblock.errors_corrected = 0; + rsblock.zero_numerators = 0; + + for (std::size_t i = 0; i < error_locations.size(); ++i) + { + const unsigned int error_location = error_locations[i]; + const galois::field_symbol alpha_inverse = field_.alpha(error_location); + const galois::field_symbol numerator = (omega(alpha_inverse) * root_exponent_table_[error_location]).poly(); + const galois::field_symbol denominator = lambda_derivative(alpha_inverse).poly(); + + if (0 != numerator) + { + if (0 != denominator) + { + rsblock[error_location - 1] ^= field_.div(numerator, denominator); + rsblock.errors_corrected++; + } + else + { + rsblock.unrecoverable = true; + rsblock.error = block_type::e_decoder_error3; + return false; + } + } + else + ++rsblock.zero_numerators; + } + + if (lambda.deg() == static_cast(rsblock.errors_detected)) + return true; + else + { + rsblock.unrecoverable = true; + rsblock.error = block_type::e_decoder_error4; + return false; + } + } + + protected: + + bool decoder_valid_; + const galois::field& field_; + std::vector root_exponent_table_; + std::vector syndrome_exponent_table_; + std::vector gamma_table_; + const 
galois::field_polynomial X_; + const unsigned int gen_initial_index_; + }; + + template + class shortened_decoder + { + public: + + typedef traits::reed_solomon_triat trait; + typedef block block_type; + + shortened_decoder(const galois::field& field, const unsigned int gen_initial_index = 0) + : decoder_(field, gen_initial_index) + {} + + inline bool decode(block_type& rsblock, const erasure_locations_t& erasure_list) const + { + typename natural_decoder_type::block_type block; + + std::fill_n(&block[0], padding_length, typename block_type::symbol_type(0)); + + for (std::size_t i = 0; i < code_length; ++i) + { + block.data[padding_length + i] = rsblock.data[i]; + } + + erasure_locations_t shifted_position_erasure_list(erasure_list.size(),0); + + for (std::size_t i = 0; i < erasure_list.size(); ++i) + { + shifted_position_erasure_list[i] = erasure_list[i] + padding_length; + } + + if (decoder_.decode(block, shifted_position_erasure_list)) + { + for (std::size_t i = 0; i < code_length; ++i) + { + rsblock.data[i] = block.data[padding_length + i]; + } + + rsblock.copy_state(block); + return true; + } + else + { + rsblock.copy_state(block); + return false; + } + } + + inline bool decode(block_type& rsblock) const + { + typename natural_decoder_type::block_type block; + + std::fill_n(&block[0], padding_length, typename block_type::symbol_type(0)); + + for (std::size_t i = 0; i < code_length; ++i) + { + block.data[padding_length + i] = rsblock.data[i]; + } + + if (decoder_.decode(block)) + { + for (std::size_t i = 0; i < code_length; ++i) + { + rsblock.data[i] = block.data[padding_length + i]; + } + + rsblock.copy_state(block); + return true; + } + else + { + rsblock.copy_state(block); + return false; + } + } + + private: + + typedef decoder natural_decoder_type; + const natural_decoder_type decoder_; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_encoder.hpp b/hsmodem/fec/schifra_reed_solomon_encoder.hpp new file mode 100644 index 0000000..87641b8 --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_encoder.hpp @@ -0,0 +1,204 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_ENCODER_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_ENCODER_HPP + + +#include + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_element.hpp" +#include "schifra_galois_field_polynomial.hpp" +#include "schifra_reed_solomon_block.hpp" +#include "schifra_ecc_traits.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + class encoder + { + public: + + typedef traits::reed_solomon_triat trait; + typedef block block_type; + + encoder(const galois::field& gfield, const galois::field_polynomial& generator) + : encoder_valid_(code_length == gfield.size()), + field_(gfield), + generator_(generator) + {} + + ~encoder() + {} + + inline bool encode(block_type& rsblock) const + { + if (!encoder_valid_) + { + rsblock.error = block_type::e_encoder_error0; + return false; + } + + const galois::field_polynomial parities = msg_poly(rsblock) % generator_; + const galois::field_symbol mask = field_.mask(); + + if (parities.deg() == (fec_length - 1)) + { + for (std::size_t i = 0; i < fec_length; ++i) + { + rsblock.fec(i) = parities[fec_length - 1 - i].poly() & mask; + } + } + else + { + /* + Note: Encoder should never branch here. + Possible issues to look for: + 1. Generator polynomial degree is not equivelent to fec length + 2. Field and code length are not consistent. + + */ + rsblock.error = block_type::e_encoder_error1; + return false; + } + + return true; + } + + inline bool encode(const std::string& data, block_type& rsblock) const + { + std::string::const_iterator itr = data.begin(); + const galois::field_symbol mask = field_.mask(); + + for (std::size_t i = 0; i < data_length; ++i, ++itr) + { + rsblock.data[i] = static_cast(*itr) & mask; + } + + return encode(rsblock); + } + + private: + + encoder(); + encoder(const encoder& enc); + encoder& operator=(const encoder& enc); + + inline galois::field_polynomial msg_poly(const block_type& rsblock) const + { + galois::field_polynomial message(field_, code_length); + + for (std::size_t i = fec_length; i < code_length; ++i) + { + message[i] = rsblock.data[code_length - 1 - i]; + } + + return message; + } + + const bool encoder_valid_; + const galois::field& field_; + const galois::field_polynomial generator_; + }; + + template + class shortened_encoder + { + public: + + typedef traits::reed_solomon_triat trait; + typedef block block_type; + typedef block short_block_t; + + shortened_encoder(const galois::field& gfield, + const galois::field_polynomial& generator) + : encoder_(gfield, generator) + {} + + inline bool encode(block_type& rsblock) const + { + short_block_t block; + + std::fill_n(&block[0], padding_length, typename block_type::symbol_type(0)); + + for (std::size_t i = 0; i < data_length; ++i) + { + block.data[padding_length + i] = rsblock.data[i]; + } + + if (encoder_.encode(block)) + { + for (std::size_t i = 0; i < fec_length; ++i) + { + rsblock.fec(i) = block.fec(i); + } + + return true; + } + else + return false; + } + + inline bool encode(const std::string& data, block_type& rsblock) const + { + short_block_t block; + + std::fill_n(&block[0], padding_length, typename block_type::symbol_type(0)); + + for (std::size_t i = 0; i < data_length; ++i) + { + block.data[padding_length + i] = data[i]; + } + + if (encoder_.encode(block)) + { + for (std::size_t i = 0; i < code_length; ++i) + { + rsblock.data[i] = 
block.data[padding_length + i]; + } + + return true; + } + else + return false; + } + + private: + + const encoder encoder_; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_file_decoder.hpp b/hsmodem/fec/schifra_reed_solomon_file_decoder.hpp new file mode 100644 index 0000000..f189868 --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_file_decoder.hpp @@ -0,0 +1,171 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_FILE_DECODER_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_FILE_DECODER_HPP + + +#include +#include + +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_fileio.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + class file_decoder + { + public: + + typedef decoder decoder_type; + typedef typename decoder_type::block_type block_type; + + file_decoder(const decoder_type& decoder, + const std::string& input_file_name, + const std::string& output_file_name) + : current_block_index_(0) + { + std::size_t remaining_bytes = schifra::fileio::file_size(input_file_name); + + if (remaining_bytes == 0) + { + std::cout << "reed_solomon::file_decoder() - Error: input file has ZERO size." << std::endl; + return; + } + + std::ifstream in_stream(input_file_name.c_str(),std::ios::binary); + if (!in_stream) + { + std::cout << "reed_solomon::file_decoder() - Error: input file could not be opened." << std::endl; + return; + } + + std::ofstream out_stream(output_file_name.c_str(),std::ios::binary); + if (!out_stream) + { + std::cout << "reed_solomon::file_decoder() - Error: output file could not be created." << std::endl; + return; + } + + current_block_index_ = 0; + + while (remaining_bytes >= code_length) + { + process_complete_block(decoder,in_stream,out_stream); + remaining_bytes -= code_length; + current_block_index_++; + } + + if (remaining_bytes > 0) + { + process_partial_block(decoder,in_stream,out_stream,remaining_bytes); + } + + in_stream.close(); + out_stream.close(); + } + + private: + + inline void process_complete_block(const decoder_type& decoder, + std::ifstream& in_stream, + std::ofstream& out_stream) + { + in_stream.read(&buffer_[0],static_cast(code_length)); + copy(buffer_,code_length,block_); + + if (!decoder.decode(block_)) + { + std::cout << "reed_solomon::file_decoder.process_complete_block() - Error during decoding of block " << current_block_index_ << "!" 
<< std::endl; + return; + } + + for (std::size_t i = 0; i < data_length; ++i) + { + buffer_[i] = static_cast(block_[i]); + } + + out_stream.write(&buffer_[0],static_cast(data_length)); + } + + inline void process_partial_block(const decoder_type& decoder, + std::ifstream& in_stream, + std::ofstream& out_stream, + const std::size_t& read_amount) + { + if (read_amount <= fec_length) + { + std::cout << "reed_solomon::file_decoder.process_partial_block() - Error during decoding of block " << current_block_index_ << "!" << std::endl; + return; + } + + in_stream.read(&buffer_[0],static_cast(read_amount)); + + for (std::size_t i = 0; i < (read_amount - fec_length); ++i) + { + block_.data[i] = static_cast(buffer_[i]); + } + + if ((read_amount - fec_length) < data_length) + { + for (std::size_t i = (read_amount - fec_length); i < data_length; ++i) + { + block_.data[i] = 0; + } + } + + for (std::size_t i = 0; i < fec_length; ++i) + { + block_.fec(i) = static_cast(buffer_[(read_amount - fec_length) + i]); + } + + if (!decoder.decode(block_)) + { + std::cout << "reed_solomon::file_decoder.process_partial_block() - Error during decoding of block " << current_block_index_ << "!" << std::endl; + return; + } + + for (std::size_t i = 0; i < (read_amount - fec_length); ++i) + { + buffer_[i] = static_cast(block_.data[i]); + } + + out_stream.write(&buffer_[0],static_cast(read_amount - fec_length)); + } + + block_type block_; + std::size_t current_block_index_; + char buffer_[code_length]; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_file_encoder.hpp b/hsmodem/fec/schifra_reed_solomon_file_encoder.hpp new file mode 100644 index 0000000..98649ab --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_file_encoder.hpp @@ -0,0 +1,138 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_FILE_ENCODER_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_FILE_ENCODER_HPP + + +#include +#include +#include + +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_fileio.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + class file_encoder + { + public: + + typedef encoder encoder_type; + typedef typename encoder_type::block_type block_type; + + file_encoder(const encoder_type& encoder, + const std::string& input_file_name, + const std::string& output_file_name) + { + std::size_t remaining_bytes = schifra::fileio::file_size(input_file_name); + if (remaining_bytes == 0) + { + std::cout << "reed_solomon::file_encoder() - Error: input file has ZERO size." 
<< std::endl; + return; + } + + std::ifstream in_stream(input_file_name.c_str(),std::ios::binary); + if (!in_stream) + { + std::cout << "reed_solomon::file_encoder() - Error: input file could not be opened." << std::endl; + return; + } + + std::ofstream out_stream(output_file_name.c_str(),std::ios::binary); + if (!out_stream) + { + std::cout << "reed_solomon::file_encoder() - Error: output file could not be created." << std::endl; + return; + } + + std::memset(data_buffer_,0,sizeof(data_buffer_)); + std::memset(fec_buffer_ ,0,sizeof(fec_buffer_ )); + + while (remaining_bytes >= data_length) + { + process_block(encoder,in_stream,out_stream,data_length); + remaining_bytes -= data_length; + } + + if (remaining_bytes > 0) + { + process_block(encoder,in_stream,out_stream,remaining_bytes); + } + + in_stream.close(); + out_stream.close(); + } + + private: + + inline void process_block(const encoder_type& encoder, + std::ifstream& in_stream, + std::ofstream& out_stream, + const std::size_t& read_amount) + { + in_stream.read(&data_buffer_[0],static_cast(read_amount)); + for (std::size_t i = 0; i < read_amount; ++i) + { + block_.data[i] = (data_buffer_[i] & 0xFF); + } + + if (read_amount < data_length) + { + for (std::size_t i = read_amount; i < data_length; ++i) + { + block_.data[i] = 0x00; + } + } + + if (!encoder.encode(block_)) + { + std::cout << "reed_solomon::file_encoder.process_block() - Error during encoding of block!" << std::endl; + return; + } + + for (std::size_t i = 0; i < fec_length; ++i) + { + fec_buffer_[i] = static_cast(block_.fec(i) & 0xFF); + } + + out_stream.write(&data_buffer_[0],static_cast(read_amount)); + out_stream.write(&fec_buffer_[0],fec_length); + } + + block_type block_; + char data_buffer_[data_length]; + char fec_buffer_[fec_length]; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_file_interleaver.hpp b/hsmodem/fec/schifra_reed_solomon_file_interleaver.hpp new file mode 100644 index 0000000..54cd7b4 --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_file_interleaver.hpp @@ -0,0 +1,247 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_FILE_INTERLEAVER_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_FILE_INTERLEAVER_HPP + + +#include +#include + +#include "schifra_reed_solomon_interleaving.hpp" +#include "schifra_fileio.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + class file_interleaver + { + public: + + file_interleaver(const std::string& input_file_name, + const std::string& output_file_name) + { + std::size_t remaining_bytes = schifra::fileio::file_size(input_file_name); + + if (0 == remaining_bytes) + { + std::cout << "reed_solomon::file_interleaver() - Error: input file has ZERO size." 
<< std::endl; + return; + } + + std::ifstream in_stream(input_file_name.c_str(),std::ios::binary); + + if (!in_stream) + { + std::cout << "reed_solomon::file_interleaver() - Error: input file could not be opened." << std::endl; + return; + } + + std::ofstream out_stream(output_file_name.c_str(),std::ios::binary); + + if (!out_stream) + { + std::cout << "reed_solomon::file_interleaver() - Error: output file could not be created." << std::endl; + return; + } + + while (remaining_bytes >= (block_length * stack_size)) + { + process_block(in_stream,out_stream); + remaining_bytes -= (block_length * stack_size); + } + + if (remaining_bytes > 0) + { + process_incomplete_block(in_stream,out_stream,remaining_bytes); + } + + in_stream.close(); + out_stream.close(); + } + + private: + + inline void process_block(std::ifstream& in_stream, + std::ofstream& out_stream) + { + for (std::size_t i = 0; i < stack_size; ++i) + { + in_stream.read(&block_stack_[i][0],static_cast(block_length)); + } + + interleave(block_stack_); + + for (std::size_t i = 0; i < stack_size; ++i) + { + out_stream.write(&block_stack_[i][0],static_cast(block_length)); + } + } + + inline void process_incomplete_block(std::ifstream& in_stream, + std::ofstream& out_stream, + const std::size_t amount) + { + std::size_t complete_row_count = amount / block_length; + std::size_t remainder = amount % block_length; + + for (std::size_t i = 0; i < complete_row_count; ++i) + { + in_stream.read(&block_stack_[i][0],static_cast(block_length)); + } + + if (remainder != 0) + { + in_stream.read(&block_stack_[complete_row_count][0],static_cast(remainder)); + } + + if (remainder == 0) + interleave(block_stack_,complete_row_count); + else + interleave(block_stack_,complete_row_count + 1,remainder); + + for (std::size_t i = 0; i < complete_row_count; ++i) + { + out_stream.write(&block_stack_[i][0],static_cast(block_length)); + } + + if (remainder != 0) + { + out_stream.write(&block_stack_[complete_row_count][0],static_cast(remainder)); + } + } + + data_block block_stack_[stack_size]; + + }; + + template + class file_deinterleaver + { + public: + + file_deinterleaver(const std::string& input_file_name, + const std::string& output_file_name) + { + std::size_t input_file_size = schifra::fileio::file_size(input_file_name); + + if (input_file_size == 0) + { + std::cout << "reed_solomon::file_deinterleaver() - Error: input file has ZERO size." << std::endl; + return; + } + + std::ifstream in_stream(input_file_name.c_str(),std::ios::binary); + + if (!in_stream) + { + std::cout << "reed_solomon::file_deinterleaver() - Error: input file could not be opened." << std::endl; + return; + } + + std::ofstream out_stream(output_file_name.c_str(),std::ios::binary); + + if (!out_stream) + { + std::cout << "reed_solomon::file_deinterleaver() - Error: output file could not be created." 
<< std::endl; + return; + } + + for (std::size_t i = 0; i < (input_file_size / (block_length * stack_size)); ++i) + { + process_block(in_stream,out_stream); + } + + if ((input_file_size % (block_length * stack_size)) != 0) + { + process_incomplete_block(in_stream,out_stream,(input_file_size % (block_length * stack_size))); + } + + in_stream.close(); + out_stream.close(); + } + + private: + + inline void process_block(std::ifstream& in_stream, + std::ofstream& out_stream) + { + for (std::size_t i = 0; i < stack_size; ++i) + { + in_stream.read(&block_stack_[i][0],static_cast(block_length)); + } + + deinterleave(block_stack_); + + for (std::size_t i = 0; i < stack_size; ++i) + { + out_stream.write(&block_stack_[i][0],static_cast(block_length)); + } + } + + inline void process_incomplete_block(std::ifstream& in_stream, + std::ofstream& out_stream, + const std::size_t amount) + { + std::size_t complete_row_count = amount / block_length; + std::size_t remainder = amount % block_length; + + for (std::size_t i = 0; i < complete_row_count; ++i) + { + in_stream.read(&block_stack_[i][0],static_cast(block_length)); + } + + if (remainder != 0) + { + in_stream.read(&block_stack_[complete_row_count][0],static_cast(remainder)); + } + + if (remainder == 0) + deinterleave(block_stack_,complete_row_count); + else + deinterleave(block_stack_,complete_row_count + 1,remainder); + + for (std::size_t i = 0; i < complete_row_count; ++i) + { + out_stream.write(&block_stack_[i][0],static_cast(block_length)); + } + + if (remainder != 0) + { + out_stream.write(&block_stack_[complete_row_count][0],static_cast(remainder)); + } + } + + data_block block_stack_[stack_size]; + + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_general_codec.hpp b/hsmodem/fec/schifra_reed_solomon_general_codec.hpp new file mode 100644 index 0000000..a73ee30 --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_general_codec.hpp @@ -0,0 +1,210 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_GENERAL_CODEC_HPP +#define INCLUDE_SCHIFRA_REED_GENERAL_CODEC_HPP + + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_polynomial.hpp" +#include "schifra_sequential_root_generator_polynomial_creator.hpp" +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_ecc_traits.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + void* create_encoder(const galois::field& field, + const std::size_t& gen_poly_index) + { + const std::size_t data_length = code_length - fec_length; + traits::validate_reed_solomon_code_parameters(); + galois::field_polynomial gen_polynomial(field); + + if ( + !make_sequential_root_generator_polynomial(field, + gen_poly_index, + fec_length, + gen_polynomial) + ) + { + return reinterpret_cast(0); + } + + return new encoder(field,gen_polynomial); + } + + template + void* create_decoder(const galois::field& field, + const std::size_t& gen_poly_index) + { + const std::size_t data_length = code_length - fec_length; + traits::validate_reed_solomon_code_parameters(); + return new decoder(field,static_cast(gen_poly_index)); + } + + template + class general_codec + { + public: + + general_codec(const galois::field& field, + const std::size_t& gen_poly_index) + { + for (std::size_t i = 0; i < max_fec_length; ++i) + { + encoder_[i] = 0; + decoder_[i] = 0; + } + + encoder_[ 2] = create_encoder(field, gen_poly_index); + encoder_[ 4] = create_encoder(field, gen_poly_index); + encoder_[ 6] = create_encoder(field, gen_poly_index); + encoder_[ 8] = create_encoder(field, gen_poly_index); + encoder_[ 10] = create_encoder(field, gen_poly_index); + encoder_[ 12] = create_encoder(field, gen_poly_index); + encoder_[ 14] = create_encoder(field, gen_poly_index); + encoder_[ 16] = create_encoder(field, gen_poly_index); + encoder_[ 18] = create_encoder(field, gen_poly_index); + encoder_[ 20] = create_encoder(field, gen_poly_index); + encoder_[ 22] = create_encoder(field, gen_poly_index); + encoder_[ 24] = create_encoder(field, gen_poly_index); + encoder_[ 26] = create_encoder(field, gen_poly_index); + encoder_[ 28] = create_encoder(field, gen_poly_index); + encoder_[ 30] = create_encoder(field, gen_poly_index); + encoder_[ 32] = create_encoder(field, gen_poly_index); + encoder_[ 64] = create_encoder(field, gen_poly_index); + encoder_[ 80] = create_encoder(field, gen_poly_index); + encoder_[ 96] = create_encoder(field, gen_poly_index); + encoder_[128] = create_encoder(field, gen_poly_index); + + decoder_[ 2] = create_decoder(field, gen_poly_index); + decoder_[ 4] = create_decoder(field, gen_poly_index); + decoder_[ 6] = create_decoder(field, gen_poly_index); + decoder_[ 8] = create_decoder(field, gen_poly_index); + decoder_[ 10] = create_decoder(field, gen_poly_index); + decoder_[ 12] = create_decoder(field, gen_poly_index); + decoder_[ 14] = create_decoder(field, gen_poly_index); + decoder_[ 16] = create_decoder(field, gen_poly_index); + decoder_[ 18] = create_decoder(field, gen_poly_index); + decoder_[ 20] = create_decoder(field, gen_poly_index); + decoder_[ 22] = create_decoder(field, gen_poly_index); + decoder_[ 24] = create_decoder(field, gen_poly_index); + decoder_[ 26] = create_decoder(field, gen_poly_index); + decoder_[ 28] = create_decoder(field, gen_poly_index); + decoder_[ 
30] = create_decoder(field, gen_poly_index); + decoder_[ 32] = create_decoder(field, gen_poly_index); + decoder_[ 64] = create_decoder(field, gen_poly_index); + decoder_[ 80] = create_decoder(field, gen_poly_index); + decoder_[ 96] = create_decoder(field, gen_poly_index); + decoder_[128] = create_decoder(field, gen_poly_index); + } + + ~general_codec() + { + delete static_cast*>(encoder_[ 2]); + delete static_cast*>(encoder_[ 4]); + delete static_cast*>(encoder_[ 6]); + delete static_cast*>(encoder_[ 8]); + delete static_cast*>(encoder_[ 10]); + delete static_cast*>(encoder_[ 12]); + delete static_cast*>(encoder_[ 14]); + delete static_cast*>(encoder_[ 16]); + delete static_cast*>(encoder_[ 18]); + delete static_cast*>(encoder_[ 20]); + delete static_cast*>(encoder_[ 22]); + delete static_cast*>(encoder_[ 24]); + delete static_cast*>(encoder_[ 26]); + delete static_cast*>(encoder_[ 28]); + delete static_cast*>(encoder_[ 30]); + delete static_cast*>(encoder_[ 32]); + delete static_cast*>(encoder_[ 64]); + delete static_cast*>(encoder_[ 80]); + delete static_cast*>(encoder_[ 96]); + delete static_cast*>(encoder_[128]); + + delete static_cast*>(decoder_[ 2]); + delete static_cast*>(decoder_[ 4]); + delete static_cast*>(decoder_[ 6]); + delete static_cast*>(decoder_[ 8]); + delete static_cast*>(decoder_[ 10]); + delete static_cast*>(decoder_[ 12]); + delete static_cast*>(decoder_[ 14]); + delete static_cast*>(decoder_[ 16]); + delete static_cast*>(decoder_[ 18]); + delete static_cast*>(decoder_[ 20]); + delete static_cast*>(decoder_[ 22]); + delete static_cast*>(decoder_[ 24]); + delete static_cast*>(decoder_[ 26]); + delete static_cast*>(decoder_[ 28]); + delete static_cast*>(decoder_[ 30]); + delete static_cast*>(decoder_[ 32]); + delete static_cast*>(decoder_[ 64]); + delete static_cast*>(decoder_[ 80]); + delete static_cast*>(decoder_[ 96]); + delete static_cast*>(decoder_[128]); + } + + template + bool encode(Block& block) const + { + /* + cl : code length + fl : fec length + */ + typedef reed_solomon::encoder encoder_type; + traits::__static_assert__<(Block::trait::fec_length <= max_fec_length)>(); + if (encoder_[Block::trait::fec_length] == 0) + return false; + else + return static_cast(encoder_[Block::trait::fec_length])->encode(block); + } + + template + bool decode(Block& block) const + { + typedef reed_solomon::decoder decoder_type; + traits::__static_assert__<(Block::trait::fec_length <= max_fec_length)>(); + if (decoder_[Block::trait::fec_length] == 0) + return false; + else + return static_cast(decoder_[Block::trait::fec_length])->decode(block); + } + + private: + + void* encoder_[max_fec_length + 1]; + void* decoder_[max_fec_length + 1]; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_interleaving.hpp b/hsmodem/fec/schifra_reed_solomon_interleaving.hpp new file mode 100644 index 0000000..0f62290 --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_interleaving.hpp @@ -0,0 +1,639 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. 
The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_INTERLEAVING_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_INTERLEAVING_HPP + + +#include +#include +#include + +#include "schifra_reed_solomon_block.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + inline void interleave(block (&block_stack)[code_length]) + { + for (std::size_t i = 0; i < code_length; ++i) + { + for (std::size_t j = i + 1; j < code_length; ++j) + { + typename block::symbol_type tmp = block_stack[i][j]; + block_stack[i][j] = block_stack[j][i]; + block_stack[j][i] = tmp; + } + } + } + + template + inline void interleave(block (&block_stack)[row_count]) + { + block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < code_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == code_length) + { + aux_index = 0; + aux_row++; + } + } + } + + copy(auxiliary_stack,block_stack); + } + + template + inline void interleave(block (&block_stack)[row_count], + const std::size_t partial_code_length) + { + if (partial_code_length == code_length) + { + interleave(block_stack); + } + else + { + block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < partial_code_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == code_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t index = partial_code_length; index < code_length; ++index) + { + for (std::size_t row = 0; row < row_count - 1; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == code_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t row = 0; row < row_count - 1; ++row) + { + for (std::size_t index = 0; index < code_length - fec_length; ++index) + { + block_stack[row].data[index] = auxiliary_stack[row].data[index]; + } + for (std::size_t index = 0; index < fec_length; ++index) + { + block_stack[row].fec[index] = auxiliary_stack[row].fec[index]; + } + } + + for (std::size_t index = 0; index < partial_code_length; ++index) + { + block_stack[row_count - 1][index] = auxiliary_stack[row_count - 1][index]; + } + } + } + + template + inline void interleave(data_block (&block_stack)[block_length]) + { + for (std::size_t i = 0; i < block_length; ++i) + { + for (std::size_t j = i + 1; j < block_length; ++j) + { + T tmp = block_stack[i][j]; + block_stack[i][j] = block_stack[j][i]; + block_stack[j][i] = tmp; + } + } + } + + template + inline void interleave(data_block (&block_stack)[row_count]) + { + data_block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < block_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + 
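+            // At this point auxiliary_stack holds the row_count x block_length symbol
+            // matrix re-read column by column, so neighbouring symbols of each rebuilt
+            // row come from different source blocks and a burst error in the transmitted
+            // stream is spread over several Reed-Solomon codewords. Illustrative example
+            // (row_count = 2, block_length = 4):
+            //   in : [a0 a1 a2 a3]   [b0 b1 b2 b3]
+            //   out: [a0 b0 a1 b1]   [a2 b2 a3 b3]
+            // copy() below writes the rearranged stack back over block_stack.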
copy(auxiliary_stack,block_stack); + } + + template + inline void interleave(data_block (&block_stack)[row_count], + const std::size_t partial_block_length) + { + if (partial_block_length == block_length) + { + interleave(block_stack); + } + else + { + data_block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < partial_block_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t index = partial_block_length; index < block_length; ++index) + { + for (std::size_t row = 0; row < row_count - 1; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t row = 0; row < row_count - 1; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + for (std::size_t index = 0; index < partial_block_length; ++index) + { + block_stack[row_count - 1][index] = auxiliary_stack[row_count - 1][index]; + } + } + } + + template + inline void interleave(data_block block_stack[], + const std::size_t row_count) + { + data_block* auxiliary_stack = new data_block[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < block_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + delete[] auxiliary_stack; + } + + template + inline void interleave(data_block block_stack[], + const std::size_t row_count, + const std::size_t partial_block_length) + { + data_block* auxiliary_stack = new data_block[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < partial_block_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t index = partial_block_length; index < block_length; ++index) + { + for (std::size_t row = 0; row < row_count - 1; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t row = 0; row < row_count - 1; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + for (std::size_t index = 0; index < partial_block_length; ++index) + { + block_stack[row_count - 1][index] = auxiliary_stack[row_count - 1][index]; + } + + delete[] auxiliary_stack; + } + + template + inline void deinterleave(block (&block_stack)[row_count]) + { + block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = 0; index < code_length; ++index) + { + auxiliary_stack[aux_row][aux_index] = 
block_stack[row][index]; + + if (++aux_row == row_count) + { + aux_row = 0; + aux_index++; + } + } + } + + copy(auxiliary_stack,block_stack); + } + + template + inline void deinterleave(block (&block_stack)[row_count], + const std::size_t partial_code_length) + { + if (partial_code_length == code_length) + { + deinterleave(block_stack); + } + else + { + block auxiliary_stack[row_count]; + + std::size_t aux_row1 = 0; + std::size_t aux_index1 = 0; + + std::size_t aux_row2 = 0; + std::size_t aux_index2 = 0; + + for (std::size_t i = 0; i < partial_code_length * row_count; ++i) + { + auxiliary_stack[aux_row1][aux_index1] = block_stack[aux_row2][aux_index2]; + + if (++aux_row1 == row_count) + { + aux_row1 = 0; + aux_index1++; + } + + if (++aux_index2 == code_length) + { + aux_index2 = 0; + aux_row2++; + } + } + + for (std::size_t i = 0; aux_index1 < code_length; ++i) + { + auxiliary_stack[aux_row1][aux_index1] = block_stack[aux_row2][aux_index2]; + + if (++aux_row1 == (row_count - 1)) + { + aux_row1 = 0; + aux_index1++; + } + + if (++aux_index2 == code_length) + { + aux_index2 = 0; + aux_row2++; + } + } + + for (std::size_t row = 0; row < row_count - 1; ++row) + { + for (std::size_t index = 0; index < code_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + for (std::size_t index = 0; index < partial_code_length; ++index) + { + block_stack[row_count - 1][index] = auxiliary_stack[row_count - 1][index]; + } + } + } + + template + inline void deinterleave(data_block (&block_stack)[block_length]) + { + data_block auxiliary_stack[block_length]; + + for (std::size_t row = 0; row < block_length; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + auxiliary_stack[index][row] = block_stack[row][index]; + } + } + + copy(auxiliary_stack,block_stack); + } + + template + inline void deinterleave(data_block (&block_stack)[row_count]) + { + data_block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_row == row_count) + { + aux_row = 0; + aux_index++; + } + } + } + + copy(auxiliary_stack,block_stack); + } + + template + inline void deinterleave(data_block block_stack[], + const std::size_t row_count) + { + data_block* auxiliary_stack = new data_block[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_row == row_count) + { + aux_row = 0; + aux_index++; + } + } + } + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + delete[] auxiliary_stack; + } + + template + inline void deinterleave(data_block block_stack[], + const std::size_t row_count, + const std::size_t partial_block_length) + { + if (row_count == 1) return; + + data_block* auxiliary_stack = new data_block[row_count]; + + std::size_t aux_row1 = 0; + std::size_t aux_index1 = 0; + + std::size_t aux_row2 = 0; + std::size_t aux_index2 = 0; + + for (std::size_t i = 0; i < partial_block_length * row_count; ++i) + { + auxiliary_stack[aux_row1][aux_index1] = block_stack[aux_row2][aux_index2]; + + if (++aux_row1 
== row_count) + { + aux_row1 = 0; + aux_index1++; + } + + if (++aux_index2 == block_length) + { + aux_index2 = 0; + aux_row2++; + } + } + + for (std::size_t i = 0; aux_index1 < block_length; ++i) + { + auxiliary_stack[aux_row1][aux_index1] = block_stack[aux_row2][aux_index2]; + + if (++aux_row1 == (row_count - 1)) + { + aux_row1 = 0; + aux_index1++; + } + + if (++aux_index2 == block_length) + { + aux_index2 = 0; + aux_row2++; + } + } + + for (std::size_t row = 0; row < row_count - 1; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + for (std::size_t index = 0; index < partial_block_length; ++index) + { + block_stack[row_count - 1][index] = auxiliary_stack[row_count - 1][index]; + } + + delete[] auxiliary_stack; + } + + template + inline void interleave_columnskip(data_block* block_stack) + { + for (std::size_t i = 0; i < block_length; ++i) + { + for (std::size_t j = i + 1; j < block_length; ++j) + { + std::size_t x1 = i + skip_columns; + std::size_t x2 = j + skip_columns; + + T tmp = block_stack[i][x2]; + block_stack[i][x2] = block_stack[j][x1]; + block_stack[j][x1] = tmp; + } + } + } + + template + inline void interleave_columnskip(data_block* block_stack, const std::size_t& row_count) + { + data_block* auxiliary_stack = new data_block[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = skip_columns; + + for (std::size_t index = skip_columns; index < block_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = skip_columns; + aux_row++; + } + } + } + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = skip_columns; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + delete[] auxiliary_stack; + } + + template + inline void interleave(T* block_stack[data_length]) + { + for (std::size_t i = 0; i < data_length; ++i) + { + for (std::size_t j = i + 1; j < data_length; ++j) + { + T tmp = block_stack[i][j]; + block_stack[i][j] = block_stack[j][i]; + block_stack[j][i] = tmp; + } + } + } + + template + inline void interleave_columnskip(T* block_stack[data_length]) + { + for (std::size_t i = skip_columns; i < data_length; ++i) + { + for (std::size_t j = i + 1; j < data_length; ++j) + { + T tmp = block_stack[i][j]; + block_stack[i][j] = block_stack[j][i]; + block_stack[j][i] = tmp; + } + } + } + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_product_code.hpp b/hsmodem/fec/schifra_reed_solomon_product_code.hpp new file mode 100644 index 0000000..15f00c4 --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_product_code.hpp @@ -0,0 +1,238 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_PRODUCT_CODE_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_PRODUCT_CODE_HPP + + +#include +#include +#include + +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_reed_solomon_interleaving.hpp" +#include "schifra_reed_solomon_bitio.hpp" +#include "schifra_ecc_traits.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + template + class square_product_code_encoder + { + public: + + typedef encoder encoder_type; + typedef block block_type; + typedef traits::reed_solomon_triat trait; + typedef unsigned char data_type; + typedef data_type* data_ptr_type; + + enum { data_size = data_length * data_length }; + enum { total_size = code_length * code_length }; + + square_product_code_encoder(const encoder_type& enc) + : encoder_(enc) + {} + + bool encode(data_ptr_type data) + { + data_ptr_type curr_data_ptr = data; + + for (std::size_t row = 0; row < data_length; ++row, curr_data_ptr += data_length) + { + copy(curr_data_ptr, data_length, block_stack_[row]); + + if (!encoder_.encode(block_stack_[row])) + { + return false; + } + } + + block_type vertical_block; + + for (std::size_t col = 0; col < code_length; ++col) + { + for (std::size_t row = 0; row < data_length; ++row) + { + vertical_block[row] = block_stack_[row][col]; + } + + if (!encoder_.encode(vertical_block)) + { + return false; + } + + for (std::size_t fec_index = 0; fec_index < fec_length; ++fec_index) + { + block_stack_[data_length + fec_index].fec(fec_index) = vertical_block.fec(fec_index); + } + } + + return true; + } + + bool encode_and_interleave(data_ptr_type data) + { + if (!encode(data)) + { + return false; + } + + interleave(block_stack_); + + return true; + } + + void output(data_ptr_type output_data) + { + for (std::size_t row = 0; row < code_length; ++row, output_data += code_length) + { + bitio::convert_symbol_to_data::size>(block_stack_[row].data,output_data,code_length); + } + } + + void clear() + { + for (std::size_t i = 0; i < code_length; ++i) + { + block_stack_[i].clear(); + } + } + + private: + + square_product_code_encoder(const square_product_code_encoder& spce); + square_product_code_encoder& operator=(const square_product_code_encoder& spce); + + block_type block_stack_[code_length]; + const encoder_type& encoder_; + }; + + template + class square_product_code_decoder + { + public: + + typedef decoder decoder_type; + typedef block block_type; + typedef traits::reed_solomon_triat trait; + typedef unsigned char data_type; + typedef data_type* data_ptr_type; + + enum { data_size = data_length * data_length }; + enum { total_size = code_length * code_length }; + + square_product_code_decoder(const decoder_type& decoder) + : decoder_(decoder) + {} + + void decode(data_ptr_type data) + { + copy_proxy(data); + decode_proxy(); + } + + void deinterleave_and_decode(data_ptr_type data) + { + copy_proxy(data); + interleave(block_stack_); + decode_proxy(); + } + + void output(data_ptr_type output_data) + { + for (std::size_t row = 0; row < data_length; ++row, output_data += data_length) + { + bitio::convert_symbol_to_data::size>(block_stack_[row].data,output_data,data_length); + } + } + + void clear() + { + for (std::size_t i = 0; i < code_length; ++i) + { + block_stack_[i].clear(); + } + } + + private: + + 
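+         // The copy constructor and assignment operator declared below are left
+         // undefined on purpose: the usual pre-C++11 idiom for making the decoder
+         // non-copyable. decode_proxy() further below implements the two-pass
+         // product-code strategy: every horizontal codeword is RS-decoded first, and
+         // only if at least one row reports a failure is a second pass run over the
+         // vertical codewords, whose column-wise FEC can repair symbols the row pass
+         // could not. This mirrors the row-then-column encoding performed by
+         // square_product_code_encoder above.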
square_product_code_decoder(const square_product_code_decoder& spcd); + square_product_code_decoder& operator=(const square_product_code_decoder& spcd); + + void copy_proxy(data_ptr_type data) + { + for (std::size_t row = 0; row < code_length; ++row, data += code_length) + { + bitio::convert_data_to_symbol::size>(data,code_length,block_stack_[row].data); + } + } + + void decode_proxy() + { + bool first_iteration_failure = false; + + for (std::size_t row = 0; row < data_length; ++row) + { + if (!decoder_.decode(block_stack_[row])) + { + first_iteration_failure = true; + } + } + + if (!first_iteration_failure) + { + /* + Either no errors detected or all errors have + been detected and corrected. + */ + return; + } + + block_type vertical_block; + + for (std::size_t col = 0; col < code_length; ++col) + { + for (std::size_t row = 0; row < data_length; ++row) + { + vertical_block[row] = block_stack_[row][col]; + } + + decoder_.decode(vertical_block); + } + } + + block_type block_stack_[code_length]; + const decoder_type& decoder_; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_reed_solomon_speed_evaluator.hpp b/hsmodem/fec/schifra_reed_solomon_speed_evaluator.hpp new file mode 100644 index 0000000..16ac54c --- /dev/null +++ b/hsmodem/fec/schifra_reed_solomon_speed_evaluator.hpp @@ -0,0 +1,411 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_SPPED_EVALUATOR_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_SPPED_EVALUATOR_HPP + + +#include +#include +#include +#include + +#include "schifra_galois_field.hpp" +#include "schifra_sequential_root_generator_polynomial_creator.hpp" +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_reed_solomon_file_encoder.hpp" +#include "schifra_reed_solomon_file_decoder.hpp" +#include "schifra_error_processes.hpp" +#include "schifra_utilities.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + void create_messages(const encoder& rs_encoder, + std::vector< block >& original_block_list, + const bool full_test_set = false) + { + const std::size_t data_length = code_length - fec_length; + std::vector message_list; + if (full_test_set) + { + for (unsigned int i = 0; i < 256; ++i) + { + message_list.push_back(std::string(data_length,static_cast(i))); + } + } + else + { + message_list.push_back(std::string(data_length,static_cast(0x00))); + message_list.push_back(std::string(data_length,static_cast(0xAA))); + message_list.push_back(std::string(data_length,static_cast(0xA5))); + message_list.push_back(std::string(data_length,static_cast(0xAC))); + message_list.push_back(std::string(data_length,static_cast(0xCA))); + message_list.push_back(std::string(data_length,static_cast(0x5A))); + message_list.push_back(std::string(data_length,static_cast(0xCC))); + message_list.push_back(std::string(data_length,static_cast(0xF0))); + message_list.push_back(std::string(data_length,static_cast(0x0F))); + message_list.push_back(std::string(data_length,static_cast(0xFF))); + message_list.push_back(std::string(data_length,static_cast(0x92))); + message_list.push_back(std::string(data_length,static_cast(0x6D))); + message_list.push_back(std::string(data_length,static_cast(0x77))); + message_list.push_back(std::string(data_length,static_cast(0x7A))); + message_list.push_back(std::string(data_length,static_cast(0xA7))); + message_list.push_back(std::string(data_length,static_cast(0xE5))); + message_list.push_back(std::string(data_length,static_cast(0xEB))); + } + + std::string tmp_str = std::string(data_length,static_cast(0x00)); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = static_cast(i); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = static_cast(i); + } + + message_list.push_back(tmp_str); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = (((i & 0x01) == 1) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = (((i & 0x01) == 0) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = (((i & 0x01) == 1) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = (((i & 0x01) == 0) ? 
static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + tmp_str = std::string(data_length,static_cast(0x00)); + + for (std::size_t i = 0; i < (data_length >> 1); ++i) + { + tmp_str[i] = static_cast(0xFF); + } + + message_list.push_back(tmp_str); + + tmp_str = std::string(data_length,static_cast(0xFF)) ; + + for (std::size_t i = 0; i < (data_length >> 1); ++i) + { + tmp_str[i] = static_cast(0x00); + } + + message_list.push_back(tmp_str); + + for (std::size_t i = 0; i < message_list.size(); ++i) + { + block current_block; + rs_encoder.encode(message_list[i],current_block); + original_block_list.push_back(current_block); + } + } + + template , + typename RSDecoder = decoder, + std::size_t data_length = code_length - fec_length> + struct all_errors_decoder_speed_test + { + public: + + all_errors_decoder_speed_test(const std::size_t prim_poly_size, const unsigned int prim_poly[]) + { + galois::field field(field_descriptor,prim_poly_size,prim_poly); + galois::field_polynomial generator_polynomial(field); + + if ( + !make_sequential_root_generator_polynomial(field, + gen_poly_index, + fec_length, + generator_polynomial) + ) + { + return; + } + + RSEncoder rs_encoder(field,generator_polynomial); + RSDecoder rs_decoder(field,gen_poly_index); + + std::vector< block > original_block; + + create_messages(rs_encoder,original_block); + + std::vector > rs_block; + std::vector block_index_list; + + for (std::size_t block_index = 0; block_index < original_block.size(); ++block_index) + { + for (std::size_t error_count = 1; error_count <= (fec_length >> 1); ++error_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block block = original_block[block_index]; + corrupt_message_all_errors(block,error_count,start_position,1); + rs_block.push_back(block); + block_index_list.push_back(block_index); + } + } + } + + const std::size_t max_iterations = 100; + std::size_t blocks_decoded = 0; + std::size_t block_failures = 0; + + schifra::utils::timer timer; + timer.start(); + + for (std::size_t j = 0; j < max_iterations; ++j) + { + for (std::size_t i = 0; i < rs_block.size(); ++i) + { + if (!rs_decoder.decode(rs_block[i])) + { + std::cout << "Decoding Failure!" << std::endl; + block_failures++; + } + else if (!are_blocks_equivelent(rs_block[i],original_block[block_index_list[i]])) + { + std::cout << "Error Correcting Failure!" 
<< std::endl; + block_failures++; + } + else + blocks_decoded++; + } + } + + timer.stop(); + + double time = timer.time(); + double mbps = ((max_iterations * rs_block.size() * data_length) * 8.0) / (1048576.0 * time); + + print_codec_properties(); + + if (block_failures == 0) + printf("Blocks decoded: %8d Time:%8.3fsec Rate:%8.3fMbps\n", + static_cast(blocks_decoded), + time, + mbps); + else + std::cout << "Blocks decoded: " << blocks_decoded << "\tDecode Failures: " << block_failures <<"\tTime: " << time <<"sec\tRate: " << mbps << "Mbps" << std::endl; + } + + void print_codec_properties() + { + printf("[All Errors Test] Codec: RS(%03d,%03d,%03d) ", + static_cast(code_length), + static_cast(data_length), + static_cast(fec_length)); + } + }; + + template , + typename RSDecoder = decoder, + std::size_t data_length = code_length - fec_length> + struct all_erasures_decoder_speed_test + { + public: + + all_erasures_decoder_speed_test(const std::size_t prim_poly_size, const unsigned int prim_poly[]) + { + galois::field field(field_descriptor,prim_poly_size,prim_poly); + galois::field_polynomial generator_polynomial(field); + + if ( + !make_sequential_root_generator_polynomial(field, + gen_poly_index, + fec_length, + generator_polynomial) + ) + { + return; + } + + RSEncoder rs_encoder(field,generator_polynomial); + RSDecoder rs_decoder(field,gen_poly_index); + + std::vector< block > original_block; + + create_messages(rs_encoder,original_block); + + std::vector > rs_block; + std::vector erasure_list; + std::vector block_index_list; + + for (std::size_t block_index = 0; block_index < original_block.size(); ++block_index) + { + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block block = original_block[block_index]; + erasure_locations_t erasures; + corrupt_message_all_erasures(block,erasures,erasure_count,start_position,1); + + if (erasure_count != erasures.size()) + { + std::cout << "all_erasures_decoder_speed_test() - Failed to properly generate erasures list. Details:"; + std::cout << "(" << block_index << "," << erasure_count << "," << start_position << ")" << std::endl; + } + + rs_block.push_back(block); + erasure_list.push_back(erasures); + block_index_list.push_back(block_index); + } + } + } + + const std::size_t max_iterations = 100; + std::size_t blocks_decoded = 0; + std::size_t block_failures = 0; + + schifra::utils::timer timer; + timer.start(); + + for (std::size_t j = 0; j < max_iterations; ++j) + { + for (std::size_t i = 0; i < rs_block.size(); ++i) + { + if (!rs_decoder.decode(rs_block[i],erasure_list[i])) + { + std::cout << "Decoding Failure!" << std::endl; + block_failures++; + } + else if (!are_blocks_equivelent(rs_block[i],original_block[block_index_list[i]])) + { + std::cout << "Error Correcting Failure!" 
<< std::endl; + block_failures++; + } + else + blocks_decoded++; + } + } + + timer.stop(); + + double time = timer.time(); + double mbps = ((max_iterations * rs_block.size() * data_length) * 8.0) / (1048576.0 * time); + + print_codec_properties(); + + if (block_failures == 0) + printf("Blocks decoded: %8d Time:%8.3fsec Rate:%8.3fMbps\n", + static_cast(blocks_decoded), + time, + mbps); + else + std::cout << "Blocks decoded: " << blocks_decoded << "\tDecode Failures: " << block_failures <<"\tTime: " << time <<"sec\tRate: " << mbps << "Mbps" << std::endl; + } + + void print_codec_properties() + { + printf("[All Erasures Test] Codec: RS(%03d,%03d,%03d) ", + static_cast(code_length), + static_cast(data_length), + static_cast(fec_length)); + } + + }; + + void speed_test_00() + { + all_errors_decoder_speed_test<8,120,255, 2>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 4>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 6>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 8>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 10>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 12>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 14>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 16>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 18>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 20>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 32>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 48>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 64>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 80>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 96>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255,128>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + } + + void speed_test_01() + { + all_erasures_decoder_speed_test<8,120,255, 2>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 4>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 6>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 8>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 10>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 12>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 14>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 
16>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 18>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 20>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 32>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 48>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 64>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 80>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 96>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255,128>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + } + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_sequential_root_generator_polynomial_creator.hpp b/hsmodem/fec/schifra_sequential_root_generator_polynomial_creator.hpp new file mode 100644 index 0000000..02c9682 --- /dev/null +++ b/hsmodem/fec/schifra_sequential_root_generator_polynomial_creator.hpp @@ -0,0 +1,64 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_SEQUENTIAL_ROOT_GENERATOR_POLYNOMIAL_CREATOR_HPP +#define INCLUDE_SCHIFRA_SEQUENTIAL_ROOT_GENERATOR_POLYNOMIAL_CREATOR_HPP + + +#include + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_element.hpp" +#include "schifra_galois_field_polynomial.hpp" + + +namespace schifra +{ + + inline bool make_sequential_root_generator_polynomial(const galois::field& field, + const std::size_t initial_index, + const std::size_t num_elements, + galois::field_polynomial& generator_polynomial) + { + if ( + (initial_index >= field.size()) || + ((initial_index + num_elements) > field.size()) + ) + { + return false; + } + + galois::field_element alpha(field, 2); + galois::field_polynomial X = galois::generate_X(field); + generator_polynomial = galois::field_element(field, 1); + + for (std::size_t i = initial_index; i < (initial_index + num_elements); ++i) + { + generator_polynomial *= (X + (alpha ^ static_cast(i))); + } + + return true; + } + +} // namespace schifra + +#endif diff --git a/hsmodem/fec/schifra_utilities.hpp b/hsmodem/fec/schifra_utilities.hpp new file mode 100644 index 0000000..d52844d --- /dev/null +++ b/hsmodem/fec/schifra_utilities.hpp @@ -0,0 +1,198 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_UTILITES_HPP +#define INCLUDE_SCHIFRA_UTILITES_HPP + + +#include + +#if defined(_WIN32) || defined(__WIN32__) || defined(WIN32) + #include +#else + #include + #include +#endif + + +namespace schifra +{ + + namespace utils + { + + const std::size_t high_bits_in_char[256] = { + 0,1,1,2,1,2,2,3,1,2,2,3,2,3,3,4, + 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5, + 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, + 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, + 3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, + 4,5,5,6,5,6,6,7,5,6,6,7,6,7,7,8 + }; + + template + inline std::size_t hamming_distance_element(const T v1, const T v2) + { + std::size_t distance = 0; + const unsigned char* it1 = reinterpret_cast(&v1); + const unsigned char* it2 = reinterpret_cast(&v2); + for (std::size_t i = 0; i < sizeof(T); ++i, ++it1, ++it2) + { + distance += high_bits_in_char[((*it1) ^ (*it2)) & 0xFF]; + } + return distance; + } + + inline std::size_t hamming_distance(const unsigned char data1[], const unsigned char data2[], const std::size_t length) + { + std::size_t distance = 0; + const unsigned char* it1 = data1; + const unsigned char* it2 = data2; + for (std::size_t i = 0; i < length; ++i, ++it1, ++it2) + { + distance += high_bits_in_char[((*it1) ^ (*it2)) & 0xFF]; + } + return distance; + } + + template + inline std::size_t hamming_distance(ForwardIterator it1_begin, ForwardIterator it2_begin, ForwardIterator it1_end) + { + std::size_t distance = 0; + ForwardIterator it1 = it1_begin; + ForwardIterator it2 = it2_begin; + for (; it1 != it1_end; ++it1, ++it2) + { + distance += hamming_distance_element(*it1,*it2); + } + return distance; + } + + class timer + { + public: + + #if defined(_WIN32) || defined(__WIN32__) || defined(WIN32) + timer() + : in_use_(false) + { + QueryPerformanceFrequency(&clock_frequency_); + } + + inline void start() + { + in_use_ = true; + QueryPerformanceCounter(&start_time_); + } + + inline void stop() + { + QueryPerformanceCounter(&stop_time_); + in_use_ = false; + } + + inline double time() const + { + return (1.0 * (stop_time_.QuadPart - start_time_.QuadPart)) / (1.0 * clock_frequency_.QuadPart); + } + + #else + + timer() + : in_use_(false) + { + start_time_.tv_sec = 0; + start_time_.tv_usec = 0; + stop_time_.tv_sec = 0; + stop_time_.tv_usec = 0; + } + + inline void start() + { + in_use_ = true; + gettimeofday(&start_time_,0); + } + + inline void stop() + { + gettimeofday(&stop_time_, 0); + in_use_ = false; + } + + inline unsigned long long int usec_time() const + { + if (!in_use_) + { + if (stop_time_.tv_sec >= start_time_.tv_sec) + { + return 1000000 * (stop_time_.tv_sec - start_time_.tv_sec ) + + (stop_time_.tv_usec - start_time_.tv_usec); + } + else + return std::numeric_limits::max(); + } + else + return std::numeric_limits::max(); + } + + inline double time() const + { + return usec_time() * 0.000001; + } + + #endif + + inline bool in_use() const + { + return in_use_; + } + + private: + + bool in_use_; + + #if defined(_WIN32) || defined(__WIN32__) || defined(WIN32) + LARGE_INTEGER start_time_; + LARGE_INTEGER stop_time_; 
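+         // Counts-per-second value filled in by QueryPerformanceFrequency() in the
+         // constructor; time() divides the QuadPart tick difference by it to convert
+         // performance-counter ticks into seconds.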
+ LARGE_INTEGER clock_frequency_; + #else + struct timeval start_time_; + struct timeval stop_time_; + #endif + }; + + } // namespace utils + +} // namespace schifra + + +#endif diff --git a/hsmodem/fft.cpp b/hsmodem/fft.cpp new file mode 100755 index 0000000..1d6e2c5 --- /dev/null +++ b/hsmodem/fft.cpp @@ -0,0 +1,126 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +* +*/ + +#include "hsmodem.h" +#ifdef _WIN32_ +#include "fftw_lib/fftw3.h" +#endif +#ifdef _LINUX_ +#include +#endif + +#define FFT_AUDIOSAMPLERATE 8000 + +double *din = NULL; // input data for fft +fftw_complex *cpout = NULL; // ouput data from fft +fftw_plan plan = NULL; +#define fft_rate (FFT_AUDIOSAMPLERATE / 10) // resolution: 10 Hz +int fftidx = 0; +int fftcnt = fft_rate/2+1; // number of output values +uint16_t fftout[FFT_AUDIOSAMPLERATE / 10/2+1]; +int downsamp = 0; +int downphase = 0; + +uint16_t *make_waterfall(float fre, int *retlen) +{ + // Downsampling: + // needed 8000 bit/s + // caprate 48k: downsample by 6 + // caprate 44,1k: downsample by 5,5 + + if (caprate == 48000) + { + if (++downsamp < 6) return NULL; + } + if (caprate == 44100) + { + if (downphase <= 1100) + { + if (++downsamp < 5) return NULL; + } + else + { + if (++downsamp < 6) return NULL; + } + if(++downphase >= 2000) downphase = 0; + } + downsamp = 0; + + int fftrdy = 0; + + // fre are the float samples + // fill into the fft input buffer + din[fftidx++] = fre; + + if(fftidx == fft_rate) + { + fftidx = 0; + + // the fft buffer is full, execute the FFT + fftw_execute(plan); + + for (int j = 0; j < fftcnt; j++) + { + // calculate absolute value (magnitute without phase) + float fre = (float)cpout[j][0]; + float fim = (float)cpout[j][1]; + float mag = sqrt((fre * fre) + (fim * fim)); + + fftout[j] = (uint16_t)mag; + + fftrdy = 1; + } + } + + if(fftrdy == 1) + { + *retlen = fftcnt; + return fftout; + } + + return NULL; +} + +void init_fft() +{ +char fn[300]; + + sprintf(fn, "capture_fft_%d", fft_rate); // wisdom file for each capture rate + + fftw_import_wisdom_from_filename(fn); + + din = (double *)fftw_malloc(sizeof(double) * fft_rate); + cpout = (fftw_complex *)fftw_malloc(sizeof(fftw_complex) * fft_rate); + + plan = fftw_plan_dft_r2c_1d(fft_rate, din, cpout, FFTW_MEASURE); + + fftw_export_wisdom_to_filename(fn); +} + +void exit_fft() +{ + if(plan) fftw_destroy_plan(plan); + if(din) fftw_free(din); + if(cpout) fftw_free(cpout); +} diff --git a/hsmodem/fftw_lib/fftw3.h b/hsmodem/fftw_lib/fftw3.h new file mode 100644 index 0000000..76fd817 --- /dev/null +++ b/hsmodem/fftw_lib/fftw3.h @@ -0,0 +1,415 @@ +/* + * Copyright (c) 2003, 2007-14 Matteo Frigo + * Copyright (c) 2003, 2007-14 Massachusetts Institute of Technology + * + * The 
following statement of license applies *only* to this header file, + * and *not* to the other files distributed with FFTW or derived therefrom: + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS + * OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE + * GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, + * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +/***************************** NOTE TO USERS ********************************* + * + * THIS IS A HEADER FILE, NOT A MANUAL + * + * If you want to know how to use FFTW, please read the manual, + * online at http://www.fftw.org/doc/ and also included with FFTW. + * For a quick start, see the manual's tutorial section. + * + * (Reading header files to learn how to use a library is a habit + * stemming from code lacking a proper manual. Arguably, it's a + * *bad* habit in most cases, because header files can contain + * interfaces that are not part of the public, stable API.) + * + ****************************************************************************/ + +#ifndef FFTW3_H +#define FFTW3_H + +#include + +#ifdef __cplusplus +extern "C" +{ +#endif /* __cplusplus */ + +/* If is included, use the C99 complex type. Otherwise + define a type bit-compatible with C99 complex */ +#if !defined(FFTW_NO_Complex) && defined(_Complex_I) && defined(complex) && defined(I) +# define FFTW_DEFINE_COMPLEX(R, C) typedef R _Complex C +#else +# define FFTW_DEFINE_COMPLEX(R, C) typedef R C[2] +#endif + +#define FFTW_CONCAT(prefix, name) prefix ## name +#define FFTW_MANGLE_DOUBLE(name) FFTW_CONCAT(fftw_, name) +#define FFTW_MANGLE_FLOAT(name) FFTW_CONCAT(fftwf_, name) +#define FFTW_MANGLE_LONG_DOUBLE(name) FFTW_CONCAT(fftwl_, name) +#define FFTW_MANGLE_QUAD(name) FFTW_CONCAT(fftwq_, name) + +/* IMPORTANT: for Windows compilers, you should add a line +*/ +#define FFTW_DLL +/* + here and in kernel/ifftw.h if you are compiling/using FFTW as a + DLL, in order to do the proper importing/exporting, or + alternatively compile with -DFFTW_DLL or the equivalent + command-line flag. This is not necessary under MinGW/Cygwin, where + libtool does the imports/exports automatically. 
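+   (Note for this copy of the header: FFTW_DLL is defined unconditionally above,
+   since the Windows build of hsmodem is presumably linked against the prebuilt
+   libfftw3-3 import library shipped in hsmodem/fftw_lib, so the dllimport
+   declarations are the ones wanted here.)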
*/ +#if defined(FFTW_DLL) && (defined(_WIN32) || defined(__WIN32__)) + /* annoying Windows syntax for shared-library declarations */ +# if defined(COMPILING_FFTW) /* defined in api.h when compiling FFTW */ +# define FFTW_EXTERN extern __declspec(dllexport) +# else /* user is calling FFTW; import symbol */ +# define FFTW_EXTERN extern __declspec(dllimport) +# endif +#else +# define FFTW_EXTERN extern +#endif + +enum fftw_r2r_kind_do_not_use_me { + FFTW_R2HC=0, FFTW_HC2R=1, FFTW_DHT=2, + FFTW_REDFT00=3, FFTW_REDFT01=4, FFTW_REDFT10=5, FFTW_REDFT11=6, + FFTW_RODFT00=7, FFTW_RODFT01=8, FFTW_RODFT10=9, FFTW_RODFT11=10 +}; + +struct fftw_iodim_do_not_use_me { + int n; /* dimension size */ + int is; /* input stride */ + int os; /* output stride */ +}; + +#include /* for ptrdiff_t */ +struct fftw_iodim64_do_not_use_me { + ptrdiff_t n; /* dimension size */ + ptrdiff_t is; /* input stride */ + ptrdiff_t os; /* output stride */ +}; + +typedef void (*fftw_write_char_func_do_not_use_me)(char c, void *); +typedef int (*fftw_read_char_func_do_not_use_me)(void *); + +/* + huge second-order macro that defines prototypes for all API + functions. We expand this macro for each supported precision + + X: name-mangling macro + R: real data type + C: complex data type +*/ + +#define FFTW_DEFINE_API(X, R, C) \ + \ +FFTW_DEFINE_COMPLEX(R, C); \ + \ +typedef struct X(plan_s) *X(plan); \ + \ +typedef struct fftw_iodim_do_not_use_me X(iodim); \ +typedef struct fftw_iodim64_do_not_use_me X(iodim64); \ + \ +typedef enum fftw_r2r_kind_do_not_use_me X(r2r_kind); \ + \ +typedef fftw_write_char_func_do_not_use_me X(write_char_func); \ +typedef fftw_read_char_func_do_not_use_me X(read_char_func); \ + \ +FFTW_EXTERN void X(execute)(const X(plan) p); \ + \ +FFTW_EXTERN X(plan) X(plan_dft)(int rank, const int *n, \ + C *in, C *out, int sign, unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_dft_1d)(int n, C *in, C *out, int sign, \ + unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_dft_2d)(int n0, int n1, \ + C *in, C *out, int sign, unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_dft_3d)(int n0, int n1, int n2, \ + C *in, C *out, int sign, unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_many_dft)(int rank, const int *n, \ + int howmany, \ + C *in, const int *inembed, \ + int istride, int idist, \ + C *out, const int *onembed, \ + int ostride, int odist, \ + int sign, unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_guru_dft)(int rank, const X(iodim) *dims, \ + int howmany_rank, \ + const X(iodim) *howmany_dims, \ + C *in, C *out, \ + int sign, unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_guru_split_dft)(int rank, const X(iodim) *dims, \ + int howmany_rank, \ + const X(iodim) *howmany_dims, \ + R *ri, R *ii, R *ro, R *io, \ + unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_guru64_dft)(int rank, \ + const X(iodim64) *dims, \ + int howmany_rank, \ + const X(iodim64) *howmany_dims, \ + C *in, C *out, \ + int sign, unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_guru64_split_dft)(int rank, \ + const X(iodim64) *dims, \ + int howmany_rank, \ + const X(iodim64) *howmany_dims, \ + R *ri, R *ii, R *ro, R *io, \ + unsigned flags); \ + \ +FFTW_EXTERN void X(execute_dft)(const X(plan) p, C *in, C *out); \ +FFTW_EXTERN void X(execute_split_dft)(const X(plan) p, R *ri, R *ii, \ + R *ro, R *io); \ + \ +FFTW_EXTERN X(plan) X(plan_many_dft_r2c)(int rank, const int *n, \ + int howmany, \ + R *in, const int *inembed, \ + int istride, int idist, \ + C *out, const int *onembed, \ + int ostride, int odist, \ + unsigned flags); \ + \ 
+FFTW_EXTERN X(plan) X(plan_dft_r2c)(int rank, const int *n, \ + R *in, C *out, unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_dft_r2c_1d)(int n,R *in,C *out,unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_dft_r2c_2d)(int n0, int n1, \ + R *in, C *out, unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_dft_r2c_3d)(int n0, int n1, \ + int n2, \ + R *in, C *out, unsigned flags); \ + \ + \ +FFTW_EXTERN X(plan) X(plan_many_dft_c2r)(int rank, const int *n, \ + int howmany, \ + C *in, const int *inembed, \ + int istride, int idist, \ + R *out, const int *onembed, \ + int ostride, int odist, \ + unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_dft_c2r)(int rank, const int *n, \ + C *in, R *out, unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_dft_c2r_1d)(int n,C *in,R *out,unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_dft_c2r_2d)(int n0, int n1, \ + C *in, R *out, unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_dft_c2r_3d)(int n0, int n1, \ + int n2, \ + C *in, R *out, unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_guru_dft_r2c)(int rank, const X(iodim) *dims, \ + int howmany_rank, \ + const X(iodim) *howmany_dims, \ + R *in, C *out, \ + unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_guru_dft_c2r)(int rank, const X(iodim) *dims, \ + int howmany_rank, \ + const X(iodim) *howmany_dims, \ + C *in, R *out, \ + unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_guru_split_dft_r2c)( \ + int rank, const X(iodim) *dims, \ + int howmany_rank, \ + const X(iodim) *howmany_dims, \ + R *in, R *ro, R *io, \ + unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_guru_split_dft_c2r)( \ + int rank, const X(iodim) *dims, \ + int howmany_rank, \ + const X(iodim) *howmany_dims, \ + R *ri, R *ii, R *out, \ + unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_guru64_dft_r2c)(int rank, \ + const X(iodim64) *dims, \ + int howmany_rank, \ + const X(iodim64) *howmany_dims, \ + R *in, C *out, \ + unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_guru64_dft_c2r)(int rank, \ + const X(iodim64) *dims, \ + int howmany_rank, \ + const X(iodim64) *howmany_dims, \ + C *in, R *out, \ + unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_guru64_split_dft_r2c)( \ + int rank, const X(iodim64) *dims, \ + int howmany_rank, \ + const X(iodim64) *howmany_dims, \ + R *in, R *ro, R *io, \ + unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_guru64_split_dft_c2r)( \ + int rank, const X(iodim64) *dims, \ + int howmany_rank, \ + const X(iodim64) *howmany_dims, \ + R *ri, R *ii, R *out, \ + unsigned flags); \ + \ +FFTW_EXTERN void X(execute_dft_r2c)(const X(plan) p, R *in, C *out); \ +FFTW_EXTERN void X(execute_dft_c2r)(const X(plan) p, C *in, R *out); \ + \ +FFTW_EXTERN void X(execute_split_dft_r2c)(const X(plan) p, \ + R *in, R *ro, R *io); \ +FFTW_EXTERN void X(execute_split_dft_c2r)(const X(plan) p, \ + R *ri, R *ii, R *out); \ + \ +FFTW_EXTERN X(plan) X(plan_many_r2r)(int rank, const int *n, \ + int howmany, \ + R *in, const int *inembed, \ + int istride, int idist, \ + R *out, const int *onembed, \ + int ostride, int odist, \ + const X(r2r_kind) *kind, unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_r2r)(int rank, const int *n, R *in, R *out, \ + const X(r2r_kind) *kind, unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_r2r_1d)(int n, R *in, R *out, \ + X(r2r_kind) kind, unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_r2r_2d)(int n0, int n1, R *in, R *out, \ + X(r2r_kind) kind0, X(r2r_kind) kind1, \ + unsigned flags); \ +FFTW_EXTERN X(plan) X(plan_r2r_3d)(int n0, int n1, int n2, \ + R *in, R *out, X(r2r_kind) kind0, \ + X(r2r_kind) kind1, 
X(r2r_kind) kind2, \ + unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_guru_r2r)(int rank, const X(iodim) *dims, \ + int howmany_rank, \ + const X(iodim) *howmany_dims, \ + R *in, R *out, \ + const X(r2r_kind) *kind, unsigned flags); \ + \ +FFTW_EXTERN X(plan) X(plan_guru64_r2r)(int rank, const X(iodim64) *dims, \ + int howmany_rank, \ + const X(iodim64) *howmany_dims, \ + R *in, R *out, \ + const X(r2r_kind) *kind, unsigned flags); \ + \ +FFTW_EXTERN void X(execute_r2r)(const X(plan) p, R *in, R *out); \ + \ +FFTW_EXTERN void X(destroy_plan)(X(plan) p); \ +FFTW_EXTERN void X(forget_wisdom)(void); \ +FFTW_EXTERN void X(cleanup)(void); \ + \ +FFTW_EXTERN void X(set_timelimit)(double t); \ + \ +FFTW_EXTERN void X(plan_with_nthreads)(int nthreads); \ +FFTW_EXTERN int X(init_threads)(void); \ +FFTW_EXTERN void X(cleanup_threads)(void); \ +FFTW_EXTERN void X(make_planner_thread_safe)(void); \ + \ +FFTW_EXTERN int X(export_wisdom_to_filename)(const char *filename); \ +FFTW_EXTERN void X(export_wisdom_to_file)(FILE *output_file); \ +FFTW_EXTERN char *X(export_wisdom_to_string)(void); \ +FFTW_EXTERN void X(export_wisdom)(X(write_char_func) write_char, \ + void *data); \ +FFTW_EXTERN int X(import_system_wisdom)(void); \ +FFTW_EXTERN int X(import_wisdom_from_filename)(const char *filename); \ +FFTW_EXTERN int X(import_wisdom_from_file)(FILE *input_file); \ +FFTW_EXTERN int X(import_wisdom_from_string)(const char *input_string); \ +FFTW_EXTERN int X(import_wisdom)(X(read_char_func) read_char, void *data); \ + \ +FFTW_EXTERN void X(fprint_plan)(const X(plan) p, FILE *output_file); \ +FFTW_EXTERN void X(print_plan)(const X(plan) p); \ +FFTW_EXTERN char *X(sprint_plan)(const X(plan) p); \ + \ +FFTW_EXTERN void *X(malloc)(size_t n); \ +FFTW_EXTERN R *X(alloc_real)(size_t n); \ +FFTW_EXTERN C *X(alloc_complex)(size_t n); \ +FFTW_EXTERN void X(free)(void *p); \ + \ +FFTW_EXTERN void X(flops)(const X(plan) p, \ + double *add, double *mul, double *fmas); \ +FFTW_EXTERN double X(estimate_cost)(const X(plan) p); \ +FFTW_EXTERN double X(cost)(const X(plan) p); \ + \ +FFTW_EXTERN int X(alignment_of)(R *p); \ +FFTW_EXTERN const char X(version)[]; \ +FFTW_EXTERN const char X(cc)[]; \ +FFTW_EXTERN const char X(codelet_optim)[]; + + +/* end of FFTW_DEFINE_API macro */ + +FFTW_DEFINE_API(FFTW_MANGLE_DOUBLE, double, fftw_complex) +FFTW_DEFINE_API(FFTW_MANGLE_FLOAT, float, fftwf_complex) +FFTW_DEFINE_API(FFTW_MANGLE_LONG_DOUBLE, long double, fftwl_complex) + +/* __float128 (quad precision) is a gcc extension on i386, x86_64, and ia64 + for gcc >= 4.6 (compiled in FFTW with --enable-quad-precision) */ +#if (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) \ + && !(defined(__ICC) || defined(__INTEL_COMPILER) || defined(__CUDACC__) || defined(__PGI)) \ + && (defined(__i386__) || defined(__x86_64__) || defined(__ia64__)) +# if !defined(FFTW_NO_Complex) && defined(_Complex_I) && defined(complex) && defined(I) +/* note: __float128 is a typedef, which is not supported with the _Complex + keyword in gcc, so instead we use this ugly __attribute__ version. + However, we can't simply pass the __attribute__ version to + FFTW_DEFINE_API because the __attribute__ confuses gcc in pointer + types. Hence redefining FFTW_DEFINE_COMPLEX. Ugh. 
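The fftw3.h header being added here only declares the planner API. A minimal sketch of the usual call sequence (allocate, plan, execute, destroy), not taken from the hsmodem sources but using only functions and flags declared in this header:

```c
/* Minimal FFTW usage sketch (illustrative only, not part of hsmodem).
   Links against the bundled libfftw3-3 library. */
#include <stdio.h>
#include <fftw3.h>

int main(void)
{
    const int N = 1024;

    /* fftw_malloc returns memory aligned for FFTW's SIMD codelets */
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * N);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * N);

    /* create the plan before filling the input; FFTW_ESTIMATE does not
       touch the buffers, FFTW_MEASURE would overwrite them */
    fftw_plan p = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

    for (int i = 0; i < N; i++) { in[i][0] = (double)i; in[i][1] = 0.0; }

    fftw_execute(p);                                  /* run the transform */
    printf("bin 0: %f %+fi\n", out[0][0], out[0][1]);

    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(out);
    return 0;
}
```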
*/ +# undef FFTW_DEFINE_COMPLEX +# define FFTW_DEFINE_COMPLEX(R, C) typedef _Complex float __attribute__((mode(TC))) C +# endif +FFTW_DEFINE_API(FFTW_MANGLE_QUAD, __float128, fftwq_complex) +#endif + +#define FFTW_FORWARD (-1) +#define FFTW_BACKWARD (+1) + +#define FFTW_NO_TIMELIMIT (-1.0) + +/* documented flags */ +#define FFTW_MEASURE (0U) +#define FFTW_DESTROY_INPUT (1U << 0) +#define FFTW_UNALIGNED (1U << 1) +#define FFTW_CONSERVE_MEMORY (1U << 2) +#define FFTW_EXHAUSTIVE (1U << 3) /* NO_EXHAUSTIVE is default */ +#define FFTW_PRESERVE_INPUT (1U << 4) /* cancels FFTW_DESTROY_INPUT */ +#define FFTW_PATIENT (1U << 5) /* IMPATIENT is default */ +#define FFTW_ESTIMATE (1U << 6) +#define FFTW_WISDOM_ONLY (1U << 21) + +/* undocumented beyond-guru flags */ +#define FFTW_ESTIMATE_PATIENT (1U << 7) +#define FFTW_BELIEVE_PCOST (1U << 8) +#define FFTW_NO_DFT_R2HC (1U << 9) +#define FFTW_NO_NONTHREADED (1U << 10) +#define FFTW_NO_BUFFERING (1U << 11) +#define FFTW_NO_INDIRECT_OP (1U << 12) +#define FFTW_ALLOW_LARGE_GENERIC (1U << 13) /* NO_LARGE_GENERIC is default */ +#define FFTW_NO_RANK_SPLITS (1U << 14) +#define FFTW_NO_VRANK_SPLITS (1U << 15) +#define FFTW_NO_VRECURSE (1U << 16) +#define FFTW_NO_SIMD (1U << 17) +#define FFTW_NO_SLOW (1U << 18) +#define FFTW_NO_FIXED_RADIX_LARGE_N (1U << 19) +#define FFTW_ALLOW_PRUNING (1U << 20) + +#ifdef __cplusplus +} /* extern "C" */ +#endif /* __cplusplus */ + +#endif /* FFTW3_H */ diff --git a/hsmodem/fftw_lib/libfftw3-3.lib b/hsmodem/fftw_lib/libfftw3-3.lib new file mode 100755 index 0000000..add143b Binary files /dev/null and b/hsmodem/fftw_lib/libfftw3-3.lib differ diff --git a/hsmodem/frame_packer.cpp b/hsmodem/frame_packer.cpp new file mode 100755 index 0000000..1f93f76 --- /dev/null +++ b/hsmodem/frame_packer.cpp @@ -0,0 +1,320 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +* +*/ + +#include "hsmodem.h" + +void Insert(uint8_t bit); +uint8_t* getPayload(uint8_t* rxb); + +uint8_t rxbuffer[UDPBLOCKLEN*8/2+100]; // 3...bits per symbol QPSK, enough space also for QPSK and 8PSK, +100 ... reserve, just to be sure +uint8_t rx_status = 0; + +int framecounter = 0; +int lastframenum = 0; + +// header for TX, +uint8_t TXheaderbytes[HEADERLEN] = {0x53, 0xe1, 0xa6}; +// corresponds to these QPSK symbols: +// bits: 01010011 11100001 10100110 +// QPSK: +// syms: 1 1 0 3 3 2 0 1 2 2 1 2 +// 8PSK: +// syms: 2 4 7 6 0 6 4 6 + +// QPSK +// each header has 12 symbols +// we have 4 constellations +uint8_t QPSK_headertab[4][HEADERLEN*8/2]; + +// 8PSK +// each header has 8 symbols +// we have 8 constellations +uint8_t _8PSK_headertab[8][HEADERLEN*8/3]; + +/* +8CONST: . Len 8: 02 04 07 06 00 06 04 06 +8CONST: . Len 8: 03 05 06 02 00 02 05 02 +8CONST: . 
Len 8: 01 07 02 03 00 03 07 03 +8CONST: . Len 8: 04 06 03 01 00 01 06 01 +8CONST: . Len 8: 05 02 01 04 00 04 02 04 +8CONST: . Len 8: 07 03 04 05 00 05 03 05 +8CONST: . Len 8: 06 01 05 07 00 07 01 07 +8CONST: . Len 8: 02 04 07 06 00 06 04 06 + */ + +// init header tables +void init_packer() +{ + // create the QPSK symbol table for the HEADER + // in all possible rotations + convertBytesToSyms_QPSK(TXheaderbytes, QPSK_headertab[0], 3); + for(int i=1; i<4; i++) + rotateQPSKsyms(QPSK_headertab[i-1], QPSK_headertab[i], 12); + + // create the 8PSK symbol table for the HEADER + // in all possible rotations + convertBytesToSyms_8PSK(TXheaderbytes, _8PSK_headertab[0], 3); + for(int i=1; i<8; i++) + { + rotate8APSKsyms(_8PSK_headertab[i-1], _8PSK_headertab[i], 8); + } + + for(int i=0; i<8; i++) + showbytestring((char*)"8CONST: ",_8PSK_headertab[i],8); +} + +// packs a payload into an udp data block +// the payload has a size of PAYLOADLEN +// type ... inserted in the "frame type information" field +// status ... specifies first/last frame of a data stream +uint8_t *Pack(uint8_t *payload, int type, int status, int *plen) +{ + FRAME frame; // raw frame without fec + + // polulate the raw frame + + // make the frame counter + if(status & (1<<4)) + framecounter = 0; // first block of a stream + else + framecounter++; + + // insert frame counter and status bits + frame.counter_LSB = framecounter & 0xff; + int framecnt_MSB = (framecounter >> 8) & 0x03; // Bit 8+9 of framecounter + frame.status = framecnt_MSB << 6; + frame.status += ((status & 0x03)<<4); + frame.status += (type & 0x0f); + + // insert the payload + memcpy(frame.payload, payload, PAYLOADLEN); + + // calculate and insert the CRC16 + uint16_t crc16 = Crc16_messagecalc(CRC16TX,(uint8_t *)(&frame), CRCSECUREDLEN); + frame.crc16_MSB = (uint8_t)(crc16 >> 8); + frame.crc16_LSB = (uint8_t)(crc16 & 0xff); + + // make the final arry for transmission + static uint8_t txblock[UDPBLOCKLEN]; + + // calculate the fec and insert into txblock (leave space for the header) + GetFEC((uint8_t *)(&frame), DATABLOCKLEN, txblock+HEADERLEN); + + // scramble + TX_Scramble(txblock+HEADERLEN, FECBLOCKLEN); // scramble all data + + // insert the header + memcpy(txblock,TXheaderbytes,HEADERLEN); + + *plen = UDPBLOCKLEN; + return txblock; +} + + +#define MAXHEADERRS 0 + +/* + * Header erros will not cause any data errors because the CRC will filter out + * false header detects, + * but it will cause higher CPU load due to excessive execution of FEC and CRC +*/ +int seekHeadersyms() +{ + if(constellationSize == 4) + { + // QPSK + for(int tab=0; tab<4; tab++) + { + int errs = 0; + for(int i=0; i>6; // frame counter MSB + framenumrx <<= 8; + framenumrx += frame.counter_LSB; // frame counter LSB + + if (lastframenum != framenumrx) rx_status |= 4; + lastframenum = framenumrx; + if (++lastframenum >= 1024) lastframenum = 0; // 1024 = 2^10 (10 bit frame number) + + // extract information and build the string for the application + // we have 10 Management Byte then the payload follows + static uint8_t payload[PAYLOADLEN+10]; + payload[0] = frame.status & 0x0f; // frame type + payload[1] = (frame.status & 0xc0)>>6; // frame counter MSB + payload[2] = frame.counter_LSB; // frame counter LSB + payload[3] = (frame.status & 0x30)>>4; // first/last frame marker + payload[4] = rx_status; // frame lost information + payload[5] = speed >> 8; // measured line speed + payload[6] = speed; + payload[7] = 0; // free for later use + payload[8] = 0; + payload[9] = 0; + + //printf("Frame no.: %d, 
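The repo's own converters (convertBytesToSyms_QPSK / convertBytesToSyms_8PSK) live in another source file, but the symbol lists quoted in the comments above follow from a plain MSB-first bit grouping: 2 bits per QPSK symbol, 3 bits per 8PSK symbol. A standalone sketch (the helper name is invented here) that reproduces both lists from the header bytes 0x53 0xE1 0xA6:

```c
/* Reader aid, not part of the repo: reproduces the header-symbol lists
   quoted in the comments above via MSB-first bit grouping. */
#include <stdint.h>
#include <stdio.h>

static void bytes_to_syms(const uint8_t *bytes, int nbytes,
                          int bits_per_symbol, uint8_t *syms)
{
    int nbits = nbytes * 8;
    for (int s = 0; s < nbits / bits_per_symbol; s++) {
        uint8_t v = 0;
        for (int b = 0; b < bits_per_symbol; b++) {
            int bit = s * bits_per_symbol + b;               /* MSB first */
            v = (uint8_t)((v << 1) | ((bytes[bit / 8] >> (7 - bit % 8)) & 1));
        }
        syms[s] = v;
    }
}

int main(void)
{
    const uint8_t hdr[3] = { 0x53, 0xe1, 0xa6 };
    uint8_t q[12], e[8];

    bytes_to_syms(hdr, 3, 2, q);   /* -> 1 1 0 3 3 2 0 1 2 2 1 2 */
    bytes_to_syms(hdr, 3, 3, e);   /* -> 2 4 7 6 0 6 4 6         */

    for (int i = 0; i < 12; i++) printf("%d ", q[i]);
    printf("\n");
    for (int i = 0; i < 8; i++)  printf("%d ", e[i]);
    printf("\n");
    return 0;
}
```

The four QPSK and eight 8PSK rows of the header tables are then just this base sequence advanced by one constellation step per row, which is presumably how seekHeadersyms() copes with the phase ambiguity of the PSK demodulator.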
type:%d, minfo:%d\n",framenumrx,payload[0],payload[3]); + + memcpy(payload+10,frame.payload,PAYLOADLEN); + + return payload; +} diff --git a/hsmodem/frameformat.h b/hsmodem/frameformat.h new file mode 100644 index 0000000..83bf06c --- /dev/null +++ b/hsmodem/frameformat.h @@ -0,0 +1,87 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +* +*/ + +/* + * The total length of the FEC-secured part is 255, + * this is a requirement of the Shifra FEC routine, which + * is the best FEC that I have seen so far, highly recommended +*/ + +// total "on the air" frame size +// the total length must be a multiple of 2 and 3, so QPSK and 8PSK symbols fit into full bytes +// this is the case with a total length of 258 +#define HEADERLEN 3 +#define FECBLOCKLEN 255 +#define UDPBLOCKLEN (HEADERLEN + FECBLOCKLEN) + +/* !!! IMPORTANT for GNU RADIO !!! + * the UDP payload size for TX MUST be exactly UDPBLOCKLEN (258 in this case) or + * the transmitter will not align bits to symbols correctly ! + * + * RX payload size is not that important. But the currect size for + * QPSK is UDPBLOCKLEN*8/2 = 1032 and for 8PSK UDPBLOCKLEN*8/3 = 688 + * so we can use 344 which are 2 blocks for 8PSK and 3 blocks for QPSK + * */ + +// size of the elements inside an FECblock +// sum must be 255 +#define FECLEN 32 // supported: 16,32,64,128 +#define STATUSLEN 2 +#define CRCLEN 2 +#define PAYLOADLEN (FECBLOCKLEN - FECLEN - CRCLEN - STATUSLEN) +#define CRCSECUREDLEN (PAYLOADLEN + STATUSLEN) +#define DATABLOCKLEN (PAYLOADLEN + CRCLEN + STATUSLEN) + + +// the header is not FEC secured therefore we give some room for bit +// errors. Only 24 out of the 32 bits must be correct for +// a valid frame detection +extern uint8_t header[HEADERLEN]; + +typedef struct { + // the total size of the following data must be 255 - 32 = 223 bytes + // the FEC is calculated on FRAME with a length of 223 and returns + // a data block with length 255. + + // we use a 10 bits frame counter -> 1024 values + // so we can transmit a data block with a maximum + // size of 255 * 1024 = 261kByte. 
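With the sizes defined above, a frame handed to the FEC encoder is 223 bytes (2 counter/status bytes, 219 payload bytes, a CRC16), padded by 32 Reed-Solomon parity bytes to 255 and prefixed by the 3-byte header for 258 bytes on the air. A self-checking sketch (not code from the repo) of that geometry and of the status-byte layout documented in the FRAME struct that follows:

```c
/* Reader aid, not part of the repo: frame geometry and status-byte layout. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define HEADERLEN     3
#define FECBLOCKLEN   255                 /* FEC codeword size            */
#define UDPBLOCKLEN   (HEADERLEN + FECBLOCKLEN)        /* 258 on-air bytes */
#define FECLEN        32                  /* parity bytes                 */
#define STATUSLEN     2                   /* counter_LSB + status byte    */
#define CRCLEN        2
#define PAYLOADLEN    (FECBLOCKLEN - FECLEN - CRCLEN - STATUSLEN)  /* 219  */
#define DATABLOCKLEN  (PAYLOADLEN + CRCLEN + STATUSLEN)            /* 223  */

/* bits 0..3: frame type, bit 4: first frame, bit 5: last frame,
   bits 6..7: frame-counter MSB (the counter is 10 bits in total) */
static uint8_t make_status(int type, int first, int last, int counter)
{
    return (uint8_t)((((counter >> 8) & 0x03) << 6)
                   | ((last  ? 1 : 0) << 5)
                   | ((first ? 1 : 0) << 4)
                   | (type & 0x0f));
}

int main(void)
{
    assert(DATABLOCKLEN + FECLEN == FECBLOCKLEN);   /* 223 + 32 = 255      */
    assert(UDPBLOCKLEN == 258);                     /* divisible by 2 and 3 */

    uint8_t s = make_status(3 /*type*/, 1 /*first*/, 0 /*last*/, 341);
    printf("status=0x%02x  type=%d  first=%d  last=%d  counter MSB=%d\n",
           s, s & 0x0f, (s >> 4) & 1, (s >> 5) & 1, (s >> 6) & 0x03);
    return 0;
}
```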
With the maximum modem speed + // this would be a transmission time of 5,8 minutes which + // is more then enough for a single data block + uint8_t counter_LSB; // lower 8 bits of the frame counter + + // the status byte contains these information: + // bit 0..3 : 4 bit (16 values) frame type information + // bit 4 : first frame of a block if "1" + // bit 5 : last frame of a block if "1" + // bit 6..7 : MSB of the frame counter + uint8_t status; + + // payload + uint8_t payload[PAYLOADLEN]; + + // CRC16 + uint8_t crc16_MSB; + uint8_t crc16_LSB; +} FRAME; diff --git a/hsmodem/hsmodem.cpp b/hsmodem/hsmodem.cpp new file mode 100755 index 0000000..4a86bca --- /dev/null +++ b/hsmodem/hsmodem.cpp @@ -0,0 +1,439 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* made for: AMSAT-DL +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +* +*/ + +/* +* this is a console program +* it can be compiled under Linux: make +* and under Windows: Visual-Studio +* +* 3rd party libraries: +* 1) BASS Audio from https://www.un4seen.com/ + copy bass.h and bass.lib into source directory + Windows: copy bass.dll into executable directory + Linux: copy libbass.so into shared-lib folder, usually /usr/local/lib + ! NOTE: for PC-Linux and ARM-Linux you need different libraries ! 
+ +2) liquid-DSP + Linux Install Script: + this installs it from source + + sudo apt install git autoconf libsndfile-dev libasound-dev + git clone git://github.com/jgaeddert/liquid-dsp.git + cd liquid-dsp + ./bootstrap.sh + ./configure + make -j 8 + sudo make install + sudo ldconfig + + a working copy of the source code is in ../3rdParty/liquid-dsp + to use this source simply remove the "git clone" line from above script + it installs libliquid.so into /usr/local/lib (Ubuntu) and + liquid.h into /usr/local/include/liquid/ + + Windows: + ready libraries are in ../3rdParty/liquid-dsp-windows + copy liquid.h and liquid.lib into source directory + copy liquid.dll into executable directory +*/ + + +#include "hsmodem.h" + +void toGR_sendData(uint8_t* data, int type, int status); +void bc_rxdata(uint8_t* pdata, int len, struct sockaddr_in* rxsock); +void appdata_rxdata(uint8_t* pdata, int len, struct sockaddr_in* rxsock); +void startModem(); + +// threads will exit if set to 0 +int keeprunning = 1; + +// UDP I/O +int BC_sock_AppToModem = -1; +int DATA_sock_AppToModem = -1; +int DATA_sock_from_GR = -1; +int DATA_sock_FFT_from_GR = -1; +int DATA_sock_I_Q_from_GR = -1; + +int UdpBCport_AppToModem = 40131; +int UdpDataPort_AppToModem = 40132; +int UdpDataPort_ModemToApp = 40133; + +int UdpDataPort_toGR = 40134; +int UdpDataPort_fromGR = 40135; +int UdpDataPort_fromGR_FFT = 40136; +int UdpDataPort_fromGR_I_Q = 40137; + +// op mode depending values +// default mode if not set by the app +int speedmode = 7; +int bitsPerSymbol = 2; // QPSK=2, 8PSK=3 +int constellationSize = 4; // QPSK=4, 8PSK=8 + +char localIP[] = { "127.0.0.1" }; +char ownfilename[] = { "hsmodem" }; +char appIP[20] = { 0 }; +int fixappIP = 0; +int restart_modems = 0; + +int caprate = 44100; +int txinterpolfactor = 20; +int rxPreInterpolfactor = 5; + +int captureDeviceNo = -1; +int playbackDeviceNo = -1; + +int main(int argc, char* argv[]) +{ + int opt = 0; + char* modemip = NULL; + +#ifdef _LINUX_ + while ((opt = getopt(argc, argv, "m:")) != -1) + { + switch (opt) + { + case 'm': + // specify IP of application: hsmodem -m 192.168.0.1 + modemip = optarg; + memset(appIP, 0, 20); + int len = strlen(modemip); + if (len < 16) + { + memcpy(appIP, modemip, len); + fixappIP = 1; + printf("Application IP set to: %s\n", modemip); + } + else + { + printf("invalid Application IP: %s\n", modemip); + exit(0); + } + break; + } + } + + if (isRunning(ownfilename) == 1) + exit(0); + + install_signal_handler(); +#endif + +#ifdef _WIN32_ + if (argc != 1 && argc != 3) + { + printf("invalid argument\n"); + exit(0); + } + if (argc == 3) + { + memset(appIP, 0, 20); + int len = strlen(argv[2]); + if (len < 16) + { + memcpy(appIP, argv[2], len); + fixappIP = 1; + printf("Application IP set to: %s\n", argv[2]); + } + else + { + printf("invalid Application IP: %s\n", modemip); + exit(0); + } + } +#endif + init_packer(); + + initFEC(); + init_fft(); + int ar = init_audio(playbackDeviceNo, captureDeviceNo); + if (ar == -1) + { + keeprunning = 0; + exit(0); + } + + // start udp RX to listen for broadcast search message from Application + UdpRxInit(&BC_sock_AppToModem, UdpBCport_AppToModem, &bc_rxdata, &keeprunning); + + // start udp RX for data from application + UdpRxInit(&DATA_sock_AppToModem, UdpDataPort_AppToModem, &appdata_rxdata, &keeprunning); + + // start udp RX to listen for data from GR Receiver + UdpRxInit(&DATA_sock_from_GR, UdpDataPort_fromGR, &GRdata_rxdata, &keeprunning); + + printf("QO100modem initialised and running\n"); + + while (keeprunning) + { + 
if (restart_modems == 1) + { + startModem(); + restart_modems = 0; + } + + //doArraySend(); + + if (demodulator() == 0) + sleep_ms(100); + } + printf("stopped: %d\n", keeprunning); + +#ifdef _LINUX_ + close(BC_sock_AppToModem); +#endif +#ifdef _WIN32_ + closesocket(BC_sock_AppToModem); +#endif + + + return 0; +} + + +typedef struct { + int audio; + int tx; + int rx; + int bpsym; +} SPEEDRATE; + +SPEEDRATE sr[10] = { + // QPSK modes + {48000, 32, 8, 2}, // AudioRate, TX-Resampler, RX-Resampler/4, bit/symbol + {44100, 28, 7, 2}, // see samprate.ods + {44100, 24, 6, 2}, + {48000, 24, 6, 2}, + {44100, 20, 5, 2}, + {48000, 20, 5, 2}, + + // 8PSK modes + {44100, 24, 6, 3}, + {48000, 24, 6, 3}, + {44100, 20, 5, 3}, + {48000, 20, 5, 3} +}; + +void startModem() +{ + bitsPerSymbol = sr[speedmode].bpsym; + constellationSize = (1 << bitsPerSymbol); // QPSK=4, 8PSK=8 + + caprate = sr[speedmode].audio; + txinterpolfactor = sr[speedmode].tx; + rxPreInterpolfactor = sr[speedmode].rx; + + // int TX audio and modulator + close_dsp(); + init_audio(playbackDeviceNo, captureDeviceNo); + init_dsp(); +} + +void setAudioDevices(int pb, int cap) +{ + //printf("%d %d\n", pb, cap); + + if (pb != playbackDeviceNo || cap != captureDeviceNo) + { + restart_modems = 1; + playbackDeviceNo = pb; + captureDeviceNo = cap; + } +} + +// called from UDP RX thread for Broadcast-search from App +void bc_rxdata(uint8_t* pdata, int len, struct sockaddr_in* rxsock) +{ + if (len > 0 && pdata[0] == 0x3c) + { + setAudioDevices(pdata[1], pdata[2]); + + char rxip[20]; + strcpy(rxip, inet_ntoa(rxsock->sin_addr)); + + if (fixappIP == 0) + { + if (strcmp(appIP, rxip)) + { + printf("new app IP: %s, restarting modems\n", rxip); + restart_modems = 1; + } + strcpy(appIP, rxip); + //printf("app (%s) is searching modem. Sending modem IP to the app\n",appIP); + // App searches for the modem IP, mirror the received messages + // so the app gets an UDP message with this local IP + int alen; + uint8_t* txdata = getAudioDevicelist(&alen); + sendUDP(appIP, UdpDataPort_ModemToApp, txdata, alen); + } + else + { + // appIP is fixed, answer only to this IP + if (!strcmp(appIP, rxip)) + { + //printf("app (%s) is searching modem. 
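From the SPEEDRATE table above, the gross on-air rate of each speedmode follows directly: symbol rate = audio rate / TX interpolation factor, bit rate = symbol rate times bits per symbol. A small sketch (not part of the repo) that prints all ten modes; mode 0 gives the 3000 bit/s minimum mentioned in the README, mode 9 gives 7200 bit/s. Header, FEC and CRC overhead (219 payload bytes out of 258 on-air bytes) still has to be subtracted to get net throughput.

```c
/* Reader aid, not part of the repo: gross rates implied by the SPEEDRATE table. */
#include <stdio.h>

int main(void)
{
    struct { int audio, tx, rx, bpsym; } sr[10] = {
        {48000, 32, 8, 2}, {44100, 28, 7, 2}, {44100, 24, 6, 2},
        {48000, 24, 6, 2}, {44100, 20, 5, 2}, {48000, 20, 5, 2},
        {44100, 24, 6, 3}, {48000, 24, 6, 3}, {44100, 20, 5, 3},
        {48000, 20, 5, 3}
    };

    for (int m = 0; m < 10; m++) {
        double symrate = (double)sr[m].audio / sr[m].tx;     /* symbols/s    */
        double bitrate = symrate * sr[m].bpsym;              /* bit/s, gross */
        printf("speedmode %d: %7.1f S/s  %s  %6.0f bit/s\n",
               m, symrate, sr[m].bpsym == 2 ? "QPSK" : "8PSK", bitrate);
    }
    return 0;
}
```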
Sending modem IP to the app\n",appIP); + restart_modems = 1; + // App searches for the modem IP, mirror the received messages + // so the app gets an UDP message with this local IP + int alen; + uint8_t* txdata = getAudioDevicelist(&alen); + sendUDP(appIP, UdpDataPort_ModemToApp, txdata, alen); + } + } + } +} + +// called by UDP RX thread for data from App +void appdata_rxdata(uint8_t* pdata, int len, struct sockaddr_in* rxsock) +{ + uint8_t type = pdata[0]; + uint8_t minfo = pdata[1]; + + if (len != (PAYLOADLEN + 2)) + { + printf("data from app: wrong length:%d (should be %d)\n", len - 2, PAYLOADLEN); + return; + } + + // type values: see oscardata config.cs: frame types + if (type == 16) + { + // Byte 1 contains the resampler ratio for TX and RX modem + speedmode = pdata[1]; + printf("set speedmode to %d\n", speedmode); + restart_modems = 1; + return; + } + + if (type == 17) + { + // auto send file + // TODO + + // for testing only: + // simulate sending a text file with 1kB length + /*int testlen = 100000; + uint8_t arr[100000]; + char c = 'A'; + for (int i = 0; i < testlen; i++) + { + arr[i] = c; + if (++c > 'Z') c = 'A'; + } + arraySend(arr, testlen, 3, (char*)"testfile.txt");*/ + return; + } + if (type == 18) + { + // auto send folder + // TODO + } + + if (type == 19) + { + // shut down this modem PC + int r = system("sudo shutdown now"); + exit(r); + } + + if (type == 20) + { + // reset liquid RX modem + resetModem(); + } + + //if (getSending() == 1) return; // already sending (Array sending) + + if (minfo == 0) + { + // this is the first frame of a larger file + // send it multiple times, like a preamble, to give the + // receiver some time for synchronisation + // duration: 3 seconds + // caprate: samples/s. This are symbols: caprate/txinterpolfactor + // and bits: symbols * bitsPerSymbol + // and bytes/second: bits/8 = (caprate/txinterpolfactor) * bitsPerSymbol / 8 + // one frame has 258 bytes, so we need for 5s: 5* ((caprate/txinterpolfactor) * bitsPerSymbol / 8) /258 + 1 frames + int numframespreamble = 3 * ((caprate / txinterpolfactor) * bitsPerSymbol / 8) / 258 + 1; + for (int i = 0; i < numframespreamble; i++) + toGR_sendData(pdata + 2, type, minfo); + } + else if ((len - 2) < PAYLOADLEN) + { + // if not enough data for a full payload add Zeros + uint8_t payload[PAYLOADLEN]; + memset(payload, 0, PAYLOADLEN); + memcpy(payload, pdata + 2, len - 2); + toGR_sendData(payload, type, minfo); + } + else + { + toGR_sendData(pdata + 2, type, minfo); + } +} + +void toGR_sendData(uint8_t* data, int type, int status) +{ + int len = 0; + uint8_t* txdata = Pack(data, type, status, &len); + + //showbytestring((char *)"BERtx: ", txdata, len); + + if (txdata != NULL) + sendToModulator(txdata, len); +} + +// called by UDP RX thread or liquid demodulator for received data +void GRdata_rxdata(uint8_t* pdata, int len, struct sockaddr_in* rxsock) +{ + static int fnd = 0; + + // raw symbols + uint8_t* pl = unpack_data(pdata, len); + if (pl != NULL) + { + // complete frame received + // send payload to app + uint8_t txpl[PAYLOADLEN + 10 + 1]; + memcpy(txpl + 1, pl, PAYLOADLEN + 10); + txpl[0] = 1; // type 1: payload data follows + sendUDP(appIP, UdpDataPort_ModemToApp, txpl, PAYLOADLEN + 10 + 1); + fnd = 0; + } + else + { + // no frame found + // if longer ws seconds nothing found, reset liquid RX modem + // comes here with symbol rate, i.e. 
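The preamble logic in appdata_rxdata() above repeats the first frame of a transfer long enough for the receiver to synchronise; note that the comment talks about 5 seconds while the multiplier actually used in the code is 3. Worked through (not code from the repo) for the default speedmode 7, i.e. 48000 Hz capture rate, TX interpolation 24, 8PSK with 3 bits per symbol:

```c
/* Reader aid, not part of the repo: the preamble length computed in
   appdata_rxdata(), evaluated for the default speedmode 7. */
#include <stdio.h>

int main(void)
{
    int caprate = 48000, txinterpolfactor = 24, bitsPerSymbol = 3;

    int bytes_per_sec = (caprate / txinterpolfactor) * bitsPerSymbol / 8; /* 750 */
    int numframespreamble = 3 * bytes_per_sec / 258 + 1;                  /* 9   */

    printf("%d byte/s on air -> repeat the first frame %d times (about 3 s)\n",
           bytes_per_sec, numframespreamble);
    return 0;
}
```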
4000 S/s + int ws = 2; + int wt = sr[speedmode].audio / sr[speedmode].tx; + if (++fnd >= (wt * ws)) + { + fnd = 0; + //printf("no signal detected %d, reset RX modem\n", wt); + resetModem(); + } + } +} diff --git a/hsmodem/hsmodem.h b/hsmodem/hsmodem.h new file mode 100755 index 0000000..fdc2a95 --- /dev/null +++ b/hsmodem/hsmodem.h @@ -0,0 +1,136 @@ + +#ifdef _WIN32 +#define _WIN32_ + // ignore senseless warnings invented by M$ to confuse developers + #pragma warning( disable : 4091 ) + #pragma warning( disable : 4003 ) +#else +#define _LINUX_ +#endif + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef _WIN32_ +#include "Winsock2.h" +#include "io.h" +#include +#include +#include +#include +#include +#include +#define _USE_MATH_DEFINES +#include + +#pragma comment(lib, "bass.lib") +#pragma comment(lib, "libliquid.lib") +#pragma comment(lib, "fftw_lib/libfftw3-3.lib") +#endif + +#ifdef _LINUX_ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#endif + +#include "bass.h" +#include "liquid.h" +#include "frameformat.h" +#include "fec.h" +#include "udp.h" + +#define jpg_tempfilename "rxdata.jpg" + +#define CRC16TX 0 +#define CRC16RX 1 +#define CRC16FILE 2 + +void init_packer(); +uint8_t* Pack(uint8_t* payload, int type, int status, int* plen); +uint8_t* unpack_data(uint8_t* rxd, int len); + +void convertBytesToSyms_QPSK(uint8_t* bytes, uint8_t* syms, int bytenum); +void convertBytesToSyms_8PSK(uint8_t* bytes, uint8_t* syms, int bytenum); + +uint8_t* convertQPSKSymToBytes(uint8_t* rxsymbols); +uint8_t* convert8PSKSymToBytes(uint8_t* rxsymbols, int len); + + +void rotateQPSKsyms(uint8_t* src, uint8_t* dst, int len); +void rotate8PSKsyms(uint8_t* src, uint8_t* dst, int len); +void rotate8APSKsyms(uint8_t* src, uint8_t* dst, int len); + +uint8_t* rotateBackQPSK(uint8_t* buf, int len, int rotations); +uint8_t* rotateBack8PSK(uint8_t* buf, int len, int rotations); +uint8_t* rotateBack8APSK(uint8_t* buf, int len, int rotations); + +void TX_Scramble(uint8_t* data, int len); +uint8_t* RX_Scramble(uint8_t* data, int len); +uint16_t Crc16_messagecalc(int rxtx, uint8_t* data, int len); + +void showbytestring(char* title, uint8_t* data, int anz); +void measure_speed(int len); + +void initFEC(); +void GetFEC(uint8_t* txblock, int len, uint8_t* destArray); +int cfec_Reconstruct(uint8_t* darr, uint8_t* destination); + +int init_audio(int pbdev, int capdev); +int pb_fifo_freespace(int nolock); +void pb_write_fifo_clear(); +void pb_write_fifo(float sample); +int cap_read_fifo(float* data); +uint8_t* getAudioDevicelist(int* len); + +void sleep_ms(int ms); +void GRdata_rxdata(uint8_t* pdata, int len, struct sockaddr_in* rxsock); + +void modulator(uint8_t sym_in); +void init_dsp(); +int demodulator(); +void sendToModulator(uint8_t* d, int len); +void resetModem(); +void close_dsp(); +void init_fft(); +void exit_fft(); +void showbytestringf(char* title, float* data, int anz); +uint16_t* make_waterfall(float fre, int* retlen); + + +extern int speedmode; +extern int bitsPerSymbol; +extern int constellationSize; +extern int speed; +extern int keeprunning; +extern int caprate; +extern int BC_sock_AppToModem; +extern int UdpDataPort_ModemToApp; +extern int txinterpolfactor; +extern int rxPreInterpolfactor; +extern char appIP[20]; + + +#ifdef _LINUX_ +int isRunning(char* prgname); +void install_signal_handler(); +int isRunning(char* prgname); +#endif diff --git 
a/hsmodem/hsmodem.vcxproj b/hsmodem/hsmodem.vcxproj new file mode 100755 index 0000000..1eb77cb --- /dev/null +++ b/hsmodem/hsmodem.vcxproj @@ -0,0 +1,247 @@ + + + + + 64bit + Win32 + + + 64bit + x64 + + + Debug + Win32 + + + Debug + x64 + + + Release + Win32 + + + Release + x64 + + + + {E6292FAA-E794-4107-BD89-2310BCDBC858} + Win32Proj + hsmodem + 8.1 + hsmodem + + + + Application + true + v140_xp + NotSet + + + Application + true + v140_xp + NotSet + + + Application + false + v140_xp + true + NotSet + + + Application + false + v140 + true + NotSet + + + Application + false + v140 + true + NotSet + + + Application + false + v140 + true + NotSet + + + + + + + + + + + + + + + + + + + + + + + + + true + + + true + + + false + ..\WinRelease\ + $(Configuration)\ + + + false + + + false + ..\..\Release\ + $(Configuration)\ + + + false + + + + + + Level3 + Disabled + WIN32;_DEBUG;_CONSOLE;_LIB;%(PreprocessorDefinitions) + + + Console + true + wsock32.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) + + + + + + + Level3 + Disabled + WIN32;_DEBUG;_CONSOLE;_LIB;%(PreprocessorDefinitions) + + + Console + true + wsock32.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) + + + + + Level3 + + + MaxSpeed + true + true + WIN32;NDEBUG;_CONSOLE;_LIB;_CRT_SECURE_NO_WARNINGS;%(PreprocessorDefinitions) + + + Console + true + true + true + wsock32.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) + + + + + Level3 + + + MaxSpeed + true + true + WIN32;NDEBUG;_CONSOLE;_LIB;_CRT_SECURE_NO_WARNINGS;%(PreprocessorDefinitions) + + + Console + true + true + true + wsock32.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) + + + + + Level3 + + + MaxSpeed + true + true + WIN32;NDEBUG;_CONSOLE;_LIB;_CRT_SECURE_NO_WARNINGS;%(PreprocessorDefinitions) + + + Console + true + true + true + wsock32.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) + + + + + Level3 + + + MaxSpeed + true + true + WIN32;NDEBUG;_CONSOLE;_LIB;_CRT_SECURE_NO_WARNINGS;%(PreprocessorDefinitions) + + + Console + true + true + true + wsock32.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/hsmodem/hsmodem.vcxproj.filters b/hsmodem/hsmodem.vcxproj.filters new file mode 100755 index 0000000..0f4731c --- /dev/null +++ b/hsmodem/hsmodem.vcxproj.filters @@ -0,0 +1,78 @@ + + + + + {4FC737F1-C7A5-4376-A066-2A32D752A2FF} + cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx + + + {93995380-89BD-4b04-88EB-625FBE52EBFB} + h;hh;hpp;hxx;hm;inl;inc;xsd + + + {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} + rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms + + + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + 
Source Files + + + Source Files + + + Source Files + + + + + Header Files + + + Header Files + + + Header Files + + + Header Files + + + Header Files + + + Header Files + + + Header Files + + + \ No newline at end of file diff --git a/hsmodem/hsmodem.vcxproj.user b/hsmodem/hsmodem.vcxproj.user new file mode 100755 index 0000000..08593da --- /dev/null +++ b/hsmodem/hsmodem.vcxproj.user @@ -0,0 +1,7 @@ + + + + $(LocalDebuggerEnvironment) + WindowsLocalDebugger + + \ No newline at end of file diff --git a/hsmodem/libliquid.lib b/hsmodem/libliquid.lib new file mode 100755 index 0000000..ab1122c Binary files /dev/null and b/hsmodem/libliquid.lib differ diff --git a/hsmodem/liquid.h b/hsmodem/liquid.h new file mode 100755 index 0000000..1bb3061 --- /dev/null +++ b/hsmodem/liquid.h @@ -0,0 +1,8823 @@ +/* + * Copyright (c) 2007 - 2020 Joseph Gaeddert + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. + */ +#ifndef __LIQUID_H__ +#define __LIQUID_H__ + +#ifdef __cplusplus +extern "C" { +# define LIQUID_USE_COMPLEX_H 0 +#else +# define LIQUID_USE_COMPLEX_H 1 +#endif // __cplusplus + +// common headers +#include + +// +// Make sure the version and version number macros weren't defined by +// some prevoiusly included header file. 
+// +#ifdef LIQUID_VERSION +# undef LIQUID_VERSION +#endif +#ifdef LIQUID_VERSION_NUMBER +# undef LIQUID_VERSION_NUMBER +#endif + +// +// Compile-time version numbers +// +// LIQUID_VERSION = "X.Y.Z" +// LIQUID_VERSION_NUMBER = (X*1000000 + Y*1000 + Z) +// +#define LIQUID_VERSION "1.3.2" +#define LIQUID_VERSION_NUMBER 1003002 + +// +// Run-time library version numbers +// +extern const char liquid_version[]; +const char * liquid_libversion(void); +int liquid_libversion_number(void); + +// run-time library validation +#define LIQUID_VALIDATE_LIBVERSION \ + if (LIQUID_VERSION_NUMBER != liquid_libversion_number()) { \ + fprintf(stderr,"%s:%u: ", __FILE__,__LINE__); \ + fprintf(stderr,"error: invalid liquid runtime library\n"); \ + exit(1); \ + } \ + +// basic error types +#define LIQUID_NUM_ERRORS 12 +typedef enum { + // everything ok + LIQUID_OK=0, + + // internal logic error; this is a bug with liquid and should be reported immediately + LIQUID_EINT, + + // invalid object, examples: + // - destroy() method called on NULL pointer + LIQUID_EIOBJ, + + // invalid parameter, or configuration; examples: + // - setting bandwidth of a filter to a negative number + // - setting FFT size to zero + // - create a spectral periodogram object with window size greater than nfft + LIQUID_EICONFIG, + + // input out of range; examples: + // - try to take log of -1 + // - try to create an FFT plan of size zero + LIQUID_EIVAL, + + // invalid vector length or dimension; examples + // - trying to refer to the 17th element of a 2 x 2 matrix + // - trying to multiply two matrices of incompatible dimensions + LIQUID_EIRANGE, + + // invalid mode; examples: + // - try to create a modem of type 'LIQUID_MODEM_XXX' which does not exit + LIQUID_EIMODE, + + // unsupported mode (e.g. LIQUID_FEC_CONV_V27 with 'libfec' not installed) + LIQUID_EUMODE, + + // object has not been created or properly initialized + // - try to run firfilt_crcf_execute(NULL, ...) + // - try to modulate using an arbitrary modem without initializing the constellation + LIQUID_ENOINIT, + + // not enough memory allocated for operation; examples: + // - try to factor 100 = 2*2*5*5 but only give 3 spaces for factors + LIQUID_EIMEM, + + // file input/output; examples: + // - could not open a file for writing because of insufficient permissions + // - could not open a file for reading because it does not exist + // - try to read more data than a file has space for + // - could not parse line in file (improper formatting) + LIQUID_EIO, + +} liquid_error_code; + +// error descriptions +extern const char * liquid_error_str[LIQUID_NUM_ERRORS]; +const char * liquid_error_info(liquid_error_code _code); + +#define LIQUID_CONCAT(prefix, name) prefix ## name +#define LIQUID_VALIDATE_INPUT + +/* + * Compile-time complex data type definitions + * + * Default: use the C99 complex data type, otherwise + * define complex type compatible with the C++ complex standard, + * otherwise resort to defining binary compatible array. 
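liquid.h carries both compile-time version macros and run-time version functions, and the LIQUID_VALIDATE_LIBVERSION macro above ties the two together. A minimal sketch (not from the repo) of checking at start-up that the libliquid runtime (libliquid.dll / libliquid.so) matches the header the modem was compiled against:

```c
/* Reader aid, not part of the repo: header vs. runtime library version check
   using the macro and functions declared above. */
#include <stdio.h>
#include "liquid.h"

int main(void)
{
    LIQUID_VALIDATE_LIBVERSION   /* prints an error and exits on a mismatch */

    printf("header  : %s (%d)\n", LIQUID_VERSION, LIQUID_VERSION_NUMBER);
    printf("runtime : %s (%d)\n", liquid_libversion(), liquid_libversion_number());
    return 0;
}
```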
+ */ +#if LIQUID_USE_COMPLEX_H==1 +# include +# define LIQUID_DEFINE_COMPLEX(R,C) typedef R _Complex C +#elif defined _GLIBCXX_COMPLEX || defined _LIBCPP_COMPLEX +# define LIQUID_DEFINE_COMPLEX(R,C) typedef std::complex C +#else +# define LIQUID_DEFINE_COMPLEX(R,C) typedef struct {R real; R imag;} C; +#endif +//# define LIQUID_DEFINE_COMPLEX(R,C) typedef R C[2] + +LIQUID_DEFINE_COMPLEX(float, liquid_float_complex); +LIQUID_DEFINE_COMPLEX(double, liquid_double_complex); + +// +// MODULE : agc (automatic gain control) +// + +// available squelch modes +typedef enum { + LIQUID_AGC_SQUELCH_UNKNOWN=0, // unknown/unavailable squelch mode + LIQUID_AGC_SQUELCH_ENABLED, // squelch enabled but signal not detected + LIQUID_AGC_SQUELCH_RISE, // signal first hit/exceeded threshold + LIQUID_AGC_SQUELCH_SIGNALHI, // signal level high (above threshold) + LIQUID_AGC_SQUELCH_FALL, // signal first dropped below threshold + LIQUID_AGC_SQUELCH_SIGNALLO, // signal level low (below threshold) + LIQUID_AGC_SQUELCH_TIMEOUT, // signal level low (below threshold for a certain time) + LIQUID_AGC_SQUELCH_DISABLED, // squelch not enabled +} agc_squelch_mode; + +#define LIQUID_AGC_MANGLE_CRCF(name) LIQUID_CONCAT(agc_crcf, name) +#define LIQUID_AGC_MANGLE_RRRF(name) LIQUID_CONCAT(agc_rrrf, name) + +// large macro +// AGC : name-mangling macro +// T : primitive data type +// TC : input/output data type +#define LIQUID_AGC_DEFINE_API(AGC,T,TC) \ + \ +/* Automatic gain control (agc) for level correction and signal */ \ +/* detection */ \ +typedef struct AGC(_s) * AGC(); \ + \ +/* Create automatic gain control object. */ \ +AGC() AGC(_create)(void); \ + \ +/* Destroy object, freeing all internally-allocated memory. */ \ +int AGC(_destroy)(AGC() _q); \ + \ +/* Print object properties to stdout, including received signal */ \ +/* strength indication (RSSI), loop bandwidth, lock status, and squelch */ \ +/* status. */ \ +int AGC(_print)(AGC() _q); \ + \ +/* Reset internal state of agc object, including gain estimate, input */ \ +/* signal level estimate, lock status, and squelch mode */ \ +/* If the squelch mode is disabled, it stays disabled, but all enabled */ \ +/* modes (e.g. LIQUID_AGC_SQUELCH_TIMEOUT) resets to just */ \ +/* LIQUID_AGC_SQUELCH_ENABLED. */ \ +int AGC(_reset)(AGC() _q); \ + \ +/* Execute automatic gain control on an single input sample */ \ +/* _q : automatic gain control object */ \ +/* _x : input sample */ \ +/* _y : output sample */ \ +int AGC(_execute)(AGC() _q, \ + TC _x, \ + TC * _y); \ + \ +/* Execute automatic gain control on block of samples pointed to by _x */ \ +/* and store the result in the array of the same length _y. */ \ +/* _q : automatic gain control object */ \ +/* _x : input data array, [size: _n x 1] */ \ +/* _n : number of input, output samples */ \ +/* _y : output data array, [size: _n x 1] */ \ +int AGC(_execute_block)(AGC() _q, \ + TC * _x, \ + unsigned int _n, \ + TC * _y); \ + \ +/* Lock agc object. When locked, the agc object still makes an estimate */ \ +/* of the signal level, but the gain setting is fixed and does not */ \ +/* change. */ \ +/* This is useful for providing coarse input signal level correction */ \ +/* and quickly detecting a packet burst but not distorting signals with */ \ +/* amplitude variation due to modulation. */ \ +int AGC(_lock)(AGC() _q); \ + \ +/* Unlock agc object, and allow amplitude correction to resume. */ \ +int AGC(_unlock)(AGC() _q); \ + \ +/* Set loop filter bandwidth: attack/release time. 
*/ \ +/* _q : automatic gain control object */ \ +/* _bt : bandwidth-time constant, _bt > 0 */ \ +int AGC(_set_bandwidth)(AGC() _q, float _bt); \ + \ +/* Get the agc object's loop filter bandwidth. */ \ +float AGC(_get_bandwidth)(AGC() _q); \ + \ +/* Get the input signal's estimated energy level, relative to unity. */ \ +/* The result is a linear value. */ \ +float AGC(_get_signal_level)(AGC() _q); \ + \ +/* Set the agc object's estimate of the input signal by specifying an */ \ +/* explicit linear value. This is useful for initializing the agc */ \ +/* object with a preliminary estimate of the signal level to help gain */ \ +/* convergence. */ \ +/* _q : automatic gain control object */ \ +/* _x2 : signal level of input, _x2 > 0 */ \ +int AGC(_set_signal_level)(AGC() _q, \ + float _x2); \ + \ +/* Get the agc object's estimated received signal strength indication */ \ +/* (RSSI) on the input signal. */ \ +/* This is similar to getting the signal level (above), but returns the */ \ +/* result in dB rather than on a linear scale. */ \ +float AGC(_get_rssi)(AGC() _q); \ + \ +/* Set the agc object's estimated received signal strength indication */ \ +/* (RSSI) on the input signal by specifying an explicit value in dB. */ \ +/* _q : automatic gain control object */ \ +/* _rssi : signal level of input [dB] */ \ +int AGC(_set_rssi)(AGC() _q, float _rssi); \ + \ +/* Get the gain value currently being applied to the input signal */ \ +/* (linear). */ \ +float AGC(_get_gain)(AGC() _q); \ + \ +/* Set the agc object's internal gain by specifying an explicit linear */ \ +/* value. */ \ +/* _q : automatic gain control object */ \ +/* _gain : gain to apply to input signal, _gain > 0 */ \ +int AGC(_set_gain)(AGC() _q, \ + float _gain); \ + \ +/* Get the ouput scaling applied to each sample (linear). */ \ +float AGC(_get_scale)(AGC() _q); \ + \ +/* Set the agc object's output scaling (linear). Note that this does */ \ +/* affect the response of the AGC. */ \ +/* _q : automatic gain control object */ \ +/* _gain : gain to apply to input signal, _gain > 0 */ \ +int AGC(_set_scale)(AGC() _q, \ + float _scale); \ + \ +/* Estimate signal level and initialize internal gain on an input */ \ +/* array. */ \ +/* _q : automatic gain control object */ \ +/* _x : input data array, [size: _n x 1] */ \ +/* _n : number of input, output samples */ \ +int AGC(_init)(AGC() _q, \ + TC * _x, \ + unsigned int _n); \ + \ +/* Enable squelch mode. */ \ +int AGC(_squelch_enable)(AGC() _q); \ + \ +/* Disable squelch mode. */ \ +int AGC(_squelch_disable)(AGC() _q); \ + \ +/* Return flag indicating if squelch is enabled or not. */ \ +int AGC(_squelch_is_enabled)(AGC() _q); \ + \ +/* Set threshold for enabling/disabling squelch. */ \ +/* _q : automatic gain control object */ \ +/* _thresh : threshold for enabling squelch [dB] */ \ +int AGC(_squelch_set_threshold)(AGC() _q, \ + T _thresh); \ + \ +/* Get squelch threshold (value in dB) */ \ +T AGC(_squelch_get_threshold)(AGC() _q); \ + \ +/* Set timeout before enabling squelch. */ \ +/* _q : automatic gain control object */ \ +/* _timeout : timeout before enabling squelch [samples] */ \ +int AGC(_squelch_set_timeout)(AGC() _q, \ + unsigned int _timeout); \ + \ +/* Get squelch timeout (number of samples) */ \ +unsigned int AGC(_squelch_get_timeout)(AGC() _q); \ + \ +/* Get squelch status (e.g. 
LIQUID_AGC_SQUELCH_TIMEOUT) */ \ +int AGC(_squelch_get_status)(AGC() _q); \ + +// Define agc APIs +LIQUID_AGC_DEFINE_API(LIQUID_AGC_MANGLE_CRCF, float, liquid_float_complex) +LIQUID_AGC_DEFINE_API(LIQUID_AGC_MANGLE_RRRF, float, float) + + + +// +// MODULE : audio +// + +// CVSD: continuously variable slope delta +typedef struct cvsd_s * cvsd; + +// create cvsd object +// _num_bits : number of adjacent bits to observe (4 recommended) +// _zeta : slope adjustment multiplier (1.5 recommended) +// _alpha : pre-/post-emphasis filter coefficient (0.9 recommended) +// NOTE: _alpha must be in [0,1] +cvsd cvsd_create(unsigned int _num_bits, + float _zeta, + float _alpha); + +// destroy cvsd object +void cvsd_destroy(cvsd _q); + +// print cvsd object parameters +void cvsd_print(cvsd _q); + +// encode/decode single sample +unsigned char cvsd_encode(cvsd _q, float _audio_sample); +float cvsd_decode(cvsd _q, unsigned char _bit); + +// encode/decode 8 samples at a time +void cvsd_encode8(cvsd _q, float * _audio, unsigned char * _data); +void cvsd_decode8(cvsd _q, unsigned char _data, float * _audio); + + +// +// MODULE : buffer +// + +// circular buffer +#define LIQUID_CBUFFER_MANGLE_FLOAT(name) LIQUID_CONCAT(cbufferf, name) +#define LIQUID_CBUFFER_MANGLE_CFLOAT(name) LIQUID_CONCAT(cbuffercf, name) + +// large macro +// CBUFFER : name-mangling macro +// T : data type +#define LIQUID_CBUFFER_DEFINE_API(CBUFFER,T) \ + \ +/* Circular buffer object for storing and retrieving samples in a */ \ +/* first-in/first-out (FIFO) manner using a minimal amount of memory */ \ +typedef struct CBUFFER(_s) * CBUFFER(); \ + \ +/* Create circular buffer object of a particular maximum storage length */ \ +/* _max_size : maximum buffer size, _max_size > 0 */ \ +CBUFFER() CBUFFER(_create)(unsigned int _max_size); \ + \ +/* Create circular buffer object of a particular maximum storage size */ \ +/* and specify the maximum number of elements that can be read at any */ \ +/* any given time */ \ +/* _max_size : maximum buffer size, _max_size > 0 */ \ +/* _max_read : maximum size that will be read from buffer */ \ +CBUFFER() CBUFFER(_create_max)(unsigned int _max_size, \ + unsigned int _max_read); \ + \ +/* Destroy cbuffer object, freeing all internal memory */ \ +void CBUFFER(_destroy)(CBUFFER() _q); \ + \ +/* Print cbuffer object properties to stdout */ \ +void CBUFFER(_print)(CBUFFER() _q); \ + \ +/* Print cbuffer object properties and internal state */ \ +void CBUFFER(_debug_print)(CBUFFER() _q); \ + \ +/* Clear internal buffer */ \ +void CBUFFER(_reset)(CBUFFER() _q); \ + \ +/* Get the number of elements currently in the buffer */ \ +unsigned int CBUFFER(_size)(CBUFFER() _q); \ + \ +/* Get the maximum number of elements the buffer can hold */ \ +unsigned int CBUFFER(_max_size)(CBUFFER() _q); \ + \ +/* Get the maximum number of elements you may read at once */ \ +unsigned int CBUFFER(_max_read)(CBUFFER() _q); \ + \ +/* Get the number of available slots (max_size - size) */ \ +unsigned int CBUFFER(_space_available)(CBUFFER() _q); \ + \ +/* Return flag indicating if the buffer is full or not */ \ +int CBUFFER(_is_full)(CBUFFER() _q); \ + \ +/* Write a single sample into the buffer */ \ +/* _q : circular buffer object */ \ +/* _v : input sample */ \ +void CBUFFER(_push)(CBUFFER() _q, \ + T _v); \ + \ +/* Write a block of samples to the buffer */ \ +/* _q : circular buffer object */ \ +/* _v : array of samples to write to buffer */ \ +/* _n : number of samples to write */ \ +void CBUFFER(_write)(CBUFFER() _q, \ + T * _v, \ + 
unsigned int _n); \ + \ +/* Remove and return a single element from the buffer by setting the */ \ +/* value of the output sample pointed to by _v */ \ +/* _q : circular buffer object */ \ +/* _v : pointer to sample output */ \ +void CBUFFER(_pop)(CBUFFER() _q, \ + T * _v); \ + \ +/* Read buffer contents by returning a pointer to the linearized array; */ \ +/* note that the returned pointer is only valid until another operation */ \ +/* is performed on the circular buffer object */ \ +/* _q : circular buffer object */ \ +/* _num_requested : number of elements requested */ \ +/* _v : output pointer */ \ +/* _num_read : number of elements referenced by _v */ \ +void CBUFFER(_read)(CBUFFER() _q, \ + unsigned int _num_requested, \ + T ** _v, \ + unsigned int * _num_read); \ + \ +/* Release _n samples from the buffer */ \ +/* _q : circular buffer object */ \ +/* _n : number of elements to release */ \ +void CBUFFER(_release)(CBUFFER() _q, \ + unsigned int _n); \ + +// Define buffer APIs +LIQUID_CBUFFER_DEFINE_API(LIQUID_CBUFFER_MANGLE_FLOAT, float) +LIQUID_CBUFFER_DEFINE_API(LIQUID_CBUFFER_MANGLE_CFLOAT, liquid_float_complex) + + + +// Windowing functions +#define LIQUID_WINDOW_MANGLE_FLOAT(name) LIQUID_CONCAT(windowf, name) +#define LIQUID_WINDOW_MANGLE_CFLOAT(name) LIQUID_CONCAT(windowcf, name) + +// large macro +// WINDOW : name-mangling macro +// T : data type +#define LIQUID_WINDOW_DEFINE_API(WINDOW,T) \ + \ +/* Sliding window first-in/first-out buffer with a fixed size */ \ +typedef struct WINDOW(_s) * WINDOW(); \ + \ +/* Create window buffer object of a fixed length */ \ +WINDOW() WINDOW(_create)(unsigned int _n); \ + \ +/* Recreate window buffer object with new length. */ \ +/* This extends an existing window's size, similar to the standard C */ \ +/* library's realloc() to n samples. */ \ +/* If the size of the new window is larger than the old one, the newest */ \ +/* values are retained at the beginning of the buffer and the oldest */ \ +/* values are truncated. If the size of the new window is smaller than */ \ +/* the old one, the oldest values are truncated. */ \ +/* _q : old window object */ \ +/* _n : new window length */ \ +WINDOW() WINDOW(_recreate)(WINDOW() _q, unsigned int _n); \ + \ +/* Destroy window object, freeing all internally memory */ \ +int WINDOW(_destroy)(WINDOW() _q); \ + \ +/* Print window object to stdout */ \ +int WINDOW(_print)(WINDOW() _q); \ + \ +/* Print window object to stdout (with extra information) */ \ +int WINDOW(_debug_print)(WINDOW() _q); \ + \ +/* Reset window object (initialize to zeros) */ \ +int WINDOW(_reset)(WINDOW() _q); \ + \ +/* Read the contents of the window by returning a pointer to the */ \ +/* aligned internal memory array. This method guarantees that the */ \ +/* elements are linearized. This method should only be used for */ \ +/* reading; writing values to the buffer has unspecified results. */ \ +/* Note that the returned pointer is only valid until another operation */ \ +/* is performed on the window buffer object */ \ +/* _q : window object */ \ +/* _v : output pointer (set to internal array) */ \ +int WINDOW(_read)(WINDOW() _q, \ + T ** _v); \ + \ +/* Index single element in buffer at a particular index */ \ +/* This retrieves the \(i^{th}\) sample in the window, storing the */ \ +/* output value in _v. */ \ +/* This is equivalent to first invoking read() and then indexing on the */ \ +/* resulting pointer; however the result is obtained much faster. 
*/ \ +/* Therefore setting the index to 0 returns the oldest value in the */ \ +/* window. */ \ +/* _q : window object */ \ +/* _i : index of element to read */ \ +/* _v : output value pointer */ \ +int WINDOW(_index)(WINDOW() _q, \ + unsigned int _i, \ + T * _v); \ + \ +/* Shifts a single sample into the right side of the window, pushing */ \ +/* the oldest (left-most) sample out of the end. Unlike stacks, the */ \ +/* window object has no equivalent "pop" method, as values are retained */ \ +/* in memory until they are overwritten. */ \ +/* _q : window object */ \ +/* _v : single input element */ \ +int WINDOW(_push)(WINDOW() _q, \ + T _v); \ + \ +/* Write array of elements onto window buffer */ \ +/* Effectively, this is equivalent to pushing each sample one at a */ \ +/* time, but executes much faster. */ \ +/* _q : window object */ \ +/* _v : input array of values to write */ \ +/* _n : number of input values to write */ \ +int WINDOW(_write)(WINDOW() _q, \ + T * _v, \ + unsigned int _n); \ + +// Define window APIs +LIQUID_WINDOW_DEFINE_API(LIQUID_WINDOW_MANGLE_FLOAT, float) +LIQUID_WINDOW_DEFINE_API(LIQUID_WINDOW_MANGLE_CFLOAT, liquid_float_complex) +//LIQUID_WINDOW_DEFINE_API(LIQUID_WINDOW_MANGLE_UINT, unsigned int) + + +// wdelay functions : windowed-delay +// Implements an efficient z^-k delay with minimal memory +#define LIQUID_WDELAY_MANGLE_FLOAT(name) LIQUID_CONCAT(wdelayf, name) +#define LIQUID_WDELAY_MANGLE_CFLOAT(name) LIQUID_CONCAT(wdelaycf, name) +//#define LIQUID_WDELAY_MANGLE_UINT(name) LIQUID_CONCAT(wdelayui, name) + +// large macro +// WDELAY : name-mangling macro +// T : data type +#define LIQUID_WDELAY_DEFINE_API(WDELAY,T) \ + \ +/* Efficient digital delay line using a minimal amount of memory */ \ +typedef struct WDELAY(_s) * WDELAY(); \ + \ +/* Create delay buffer object with a particular number of samples of */ \ +/* delay */ \ +/* _delay : number of samples of delay in the wdelay object */ \ +WDELAY() WDELAY(_create)(unsigned int _delay); \ + \ +/* Re-create delay buffer object, adjusting the delay size, preserving */ \ +/* the internal state of the object */ \ +/* _q : old delay buffer object */ \ +/* _delay : delay for new object */ \ +WDELAY() WDELAY(_recreate)(WDELAY() _q, \ + unsigned int _delay); \ + \ +/* Destroy delay buffer object, freeing internal memory */ \ +void WDELAY(_destroy)(WDELAY() _q); \ + \ +/* Print delay buffer object's state to stdout */ \ +void WDELAY(_print)(WDELAY() _q); \ + \ +/* Clear/reset state of object */ \ +void WDELAY(_reset)(WDELAY() _q); \ + \ +/* Read delayed sample at the head of the buffer and store it to the */ \ +/* output pointer */ \ +/* _q : delay buffer object */ \ +/* _v : value of delayed element */ \ +void WDELAY(_read)(WDELAY() _q, \ + T * _v); \ + \ +/* Push new sample into delay buffer object */ \ +/* _q : delay buffer object */ \ +/* _v : new value to be added to buffer */ \ +void WDELAY(_push)(WDELAY() _q, \ + T _v); \ + +// Define wdelay APIs +LIQUID_WDELAY_DEFINE_API(LIQUID_WDELAY_MANGLE_FLOAT, float) +LIQUID_WDELAY_DEFINE_API(LIQUID_WDELAY_MANGLE_CFLOAT, liquid_float_complex) +//LIQUID_WDELAY_DEFINE_API(LIQUID_WDELAY_MANGLE_UINT, unsigned int) + + + +// +// MODULE : channel +// + +#define LIQUID_CHANNEL_MANGLE_CCCF(name) LIQUID_CONCAT(channel_cccf,name) + +// large macro +// CHANNEL : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_CHANNEL_DEFINE_API(CHANNEL,TO,TC,TI) \ + \ +/* Channel emulation */ \ +typedef struct CHANNEL(_s) * 
CHANNEL(); \ + \ +/* Create channel object with default parameters */ \ +CHANNEL() CHANNEL(_create)(void); \ + \ +/* Destroy channel object, freeing all internal memory */ \ +int CHANNEL(_destroy)(CHANNEL() _q); \ + \ +/* Print channel object internals to standard output */ \ +int CHANNEL(_print)(CHANNEL() _q); \ + \ +/* Include additive white Gausss noise impairment */ \ +/* _q : channel object */ \ +/* _N0dB : noise floor power spectral density [dB] */ \ +/* _SNRdB : signal-to-noise ratio [dB] */ \ +int CHANNEL(_add_awgn)(CHANNEL() _q, \ + float _N0dB, \ + float _SNRdB); \ + \ +/* Include carrier offset impairment */ \ +/* _q : channel object */ \ +/* _frequency : carrier frequency offset [radians/sample] */ \ +/* _phase : carrier phase offset [radians] */ \ +int CHANNEL(_add_carrier_offset)(CHANNEL() _q, \ + float _frequency, \ + float _phase); \ + \ +/* Include multi-path channel impairment */ \ +/* _q : channel object */ \ +/* _h : channel coefficients (NULL for random) */ \ +/* _h_len : number of channel coefficients */ \ +int CHANNEL(_add_multipath)(CHANNEL() _q, \ + TC * _h, \ + unsigned int _h_len); \ + \ +/* Include slowly-varying shadowing impairment */ \ +/* _q : channel object */ \ +/* _sigma : standard deviation for log-normal shadowing */ \ +/* _fd : Doppler frequency, 0 <= _fd < 0.5 */ \ +int CHANNEL(_add_shadowing)(CHANNEL() _q, \ + float _sigma, \ + float _fd); \ + \ +/* Apply channel impairments on single input sample */ \ +/* _q : channel object */ \ +/* _x : input sample */ \ +/* _y : pointer to output sample */ \ +int CHANNEL(_execute)(CHANNEL() _q, \ + TI _x, \ + TO * _y); \ + \ +/* Apply channel impairments on block of samples */ \ +/* _q : channel object */ \ +/* _x : input array, [size: _n x 1] */ \ +/* _n : input array, length */ \ +/* _y : output array, [size: _n x 1] */ \ +int CHANNEL(_execute_block)(CHANNEL() _q, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + +LIQUID_CHANNEL_DEFINE_API(LIQUID_CHANNEL_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// +// time-varying multi-path channel +// +#define LIQUID_TVMPCH_MANGLE_CCCF(name) LIQUID_CONCAT(tvmpch_cccf,name) + +// large macro +// TVMPCH : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_TVMPCH_DEFINE_API(TVMPCH,TO,TC,TI) \ + \ +/* Time-varying multipath channel emulation */ \ +typedef struct TVMPCH(_s) * TVMPCH(); \ + \ +/* Create time-varying multi-path channel emulator object, specifying */ \ +/* the number of coefficients, the standard deviation of coefficients, */ \ +/* and the coherence time. The larger the standard deviation, the more */ \ +/* dramatic the frequency response of the channel. The shorter the */ \ +/* coeherent time, the faster the channel effects. 
*/ \ +/* _n : number of coefficients, _n > 0 */ \ +/* _std : standard deviation, _std >= 0 */ \ +/* _tau : normalized coherence time, 0 < _tau < 1 */ \ +TVMPCH() TVMPCH(_create)(unsigned int _n, \ + float _std, \ + float _tau); \ + \ +/* Destroy channel object, freeing all internal memory */ \ +int TVMPCH(_destroy)(TVMPCH() _q); \ + \ +/* Reset object */ \ +int TVMPCH(_reset)(TVMPCH() _q); \ + \ +/* Print channel object internals to standard output */ \ +int TVMPCH(_print)(TVMPCH() _q); \ + \ +/* Push sample into emulator */ \ +/* _q : channel object */ \ +/* _x : input sample */ \ +int TVMPCH(_push)(TVMPCH() _q, \ + TI _x); \ + \ +/* Compute output sample */ \ +/* _q : channel object */ \ +/* _y : output sample */ \ +int TVMPCH(_execute)(TVMPCH() _q, \ + TO * _y); \ + \ +/* Apply channel impairments on a block of samples */ \ +/* _q : channel object */ \ +/* _x : input array, [size: _n x 1] */ \ +/* _n : input array length */ \ +/* _y : output array, [size: _n x 1] */ \ +int TVMPCH(_execute_block)(TVMPCH() _q, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + +LIQUID_TVMPCH_DEFINE_API(LIQUID_TVMPCH_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// +// MODULE : dotprod (vector dot product) +// + +#define LIQUID_DOTPROD_MANGLE_RRRF(name) LIQUID_CONCAT(dotprod_rrrf,name) +#define LIQUID_DOTPROD_MANGLE_CCCF(name) LIQUID_CONCAT(dotprod_cccf,name) +#define LIQUID_DOTPROD_MANGLE_CRCF(name) LIQUID_CONCAT(dotprod_crcf,name) + +// large macro +// DOTPROD : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_DOTPROD_DEFINE_API(DOTPROD,TO,TC,TI) \ + \ +/* Vector dot product operation */ \ +typedef struct DOTPROD(_s) * DOTPROD(); \ + \ +/* Run dot product without creating object. This is less efficient than */ \ +/* creating the object as it is an unoptimized portable implementation */ \ +/* that doesn't take advantage of processor extensions. It is meant to */ \ +/* provide a baseline for performance comparison and a convenient way */ \ +/* to invoke a dot product operation when fast operation is not */ \ +/* necessary. */ \ +/* _v : coefficients array [size: _n x 1] */ \ +/* _x : input array [size: _n x 1] */ \ +/* _n : dotprod length, _n > 0 */ \ +/* _y : output sample pointer */ \ +void DOTPROD(_run)( TC * _v, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + \ +/* This provides the same unoptimized operation as the 'run()' method */ \ +/* above, but with the loop unrolled by a factor of 4. It is marginally */ \ +/* faster than 'run()' without unrolling the loop. */ \ +/* _v : coefficients array [size: _n x 1] */ \ +/* _x : input array [size: _n x 1] */ \ +/* _n : dotprod length, _n > 0 */ \ +/* _y : output sample pointer */ \ +void DOTPROD(_run4)( TC * _v, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + \ +/* Create vector dot product object */ \ +/* _v : coefficients array [size: _n x 1] */ \ +/* _n : dotprod length, _n > 0 */ \ +DOTPROD() DOTPROD(_create)(TC * _v, \ + unsigned int _n); \ + \ +/* Re-create dot product object of potentially a different length with */ \ +/* different coefficients. If the length of the dot product object does */ \ +/* not change, not memory reallocation is invoked. 
*/ \ +/* _q : old dotprod object */ \ +/* _v : coefficients array [size: _n x 1] */ \ +/* _n : dotprod length, _n > 0 */ \ +DOTPROD() DOTPROD(_recreate)(DOTPROD() _q, \ + TC * _v, \ + unsigned int _n); \ + \ +/* Destroy dotprod object, freeing all internal memory */ \ +void DOTPROD(_destroy)(DOTPROD() _q); \ + \ +/* Print dotprod object internals to standard output */ \ +void DOTPROD(_print)(DOTPROD() _q); \ + \ +/* Execute dot product on an input array */ \ +/* _q : dotprod object */ \ +/* _x : input array [size: _n x 1] */ \ +/* _y : output sample pointer */ \ +void DOTPROD(_execute)(DOTPROD() _q, \ + TI * _x, \ + TO * _y); \ + +LIQUID_DOTPROD_DEFINE_API(LIQUID_DOTPROD_MANGLE_RRRF, + float, + float, + float) + +LIQUID_DOTPROD_DEFINE_API(LIQUID_DOTPROD_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + +LIQUID_DOTPROD_DEFINE_API(LIQUID_DOTPROD_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +// +// sum squared methods +// + +float liquid_sumsqf(float * _v, + unsigned int _n); + +float liquid_sumsqcf(liquid_float_complex * _v, + unsigned int _n); + + +// +// MODULE : equalization +// + +// least mean-squares (LMS) +#define LIQUID_EQLMS_MANGLE_RRRF(name) LIQUID_CONCAT(eqlms_rrrf,name) +#define LIQUID_EQLMS_MANGLE_CCCF(name) LIQUID_CONCAT(eqlms_cccf,name) + +// large macro +// EQLMS : name-mangling macro +// T : data type +#define LIQUID_EQLMS_DEFINE_API(EQLMS,T) \ + \ +/* Least mean-squares equalization object */ \ +typedef struct EQLMS(_s) * EQLMS(); \ + \ +/* Create LMS EQ initialized with external coefficients */ \ +/* _h : filter coefficients; set to NULL for {1,0,0...},[size: _n x 1] */ \ +/* _n : filter length */ \ +EQLMS() EQLMS(_create)(T * _h, \ + unsigned int _n); \ + \ +/* Create LMS EQ initialized with square-root Nyquist prototype filter */ \ +/* as initial set of coefficients. This is useful for applications */ \ +/* where the baseline matched filter is a good starting point, but */ \ +/* where equalization is needed to properly remove inter-symbol */ \ +/* interference. */ \ +/* The filter length is \(2 k m + 1\) */ \ +/* _type : filter type (e.g. 
LIQUID_FIRFILT_RRC) */ \ +/* _k : samples/symbol */ \ +/* _m : filter delay (symbols) */ \ +/* _beta : rolloff factor (0 < beta <= 1) */ \ +/* _dt : fractional sample delay */ \ +EQLMS() EQLMS(_create_rnyquist)(int _type, \ + unsigned int _k, \ + unsigned int _m, \ + float _beta, \ + float _dt); \ + \ +/* Create LMS EQ initialized with low-pass filter */ \ +/* _n : filter length */ \ +/* _fc : filter cut-off normalized to sample rate, 0 < _fc <= 0.5 */ \ +EQLMS() EQLMS(_create_lowpass)(unsigned int _n, \ + float _fc); \ + \ +/* Re-create EQ initialized with external coefficients */ \ +/* _q : equalizer object */ \ +/* _h : filter coefficients (NULL for {1,0,0...}), [size: _n x 1] */ \ +/* _h_len : filter length */ \ +EQLMS() EQLMS(_recreate)(EQLMS() _q, \ + T * _h, \ + unsigned int _h_len); \ + \ +/* Destroy equalizer object, freeing all internal memory */ \ +int EQLMS(_destroy)(EQLMS() _q); \ + \ +/* Reset equalizer object, clearing internal state */ \ +int EQLMS(_reset)(EQLMS() _q); \ + \ +/* Print equalizer internal state */ \ +int EQLMS(_print)(EQLMS() _q); \ + \ +/* Get equalizer learning rate */ \ +float EQLMS(_get_bw)(EQLMS() _q); \ + \ +/* Set equalizer learning rate */ \ +/* _q : equalizer object */ \ +/* _lambda : learning rate, _lambda > 0 */ \ +int EQLMS(_set_bw)(EQLMS() _q, \ + float _lambda); \ + \ +/* Push sample into equalizer internal buffer */ \ +/* _q : equalizer object */ \ +/* _x : input sample */ \ +int EQLMS(_push)(EQLMS() _q, \ + T _x); \ + \ +/* Push block of samples into internal buffer of equalizer object */ \ +/* _q : equalizer object */ \ +/* _x : input sample array, [size: _n x 1] */ \ +/* _n : input sample array length */ \ +int EQLMS(_push_block)(EQLMS() _q, \ + T * _x, \ + unsigned int _n); \ + \ +/* Execute internal dot product and return result */ \ +/* _q : equalizer object */ \ +/* _y : output sample */ \ +int EQLMS(_execute)(EQLMS() _q, \ + T * _y); \ + \ +/* Execute equalizer with block of samples using constant */ \ +/* modulus algorithm, operating on a decimation rate of _k */ \ +/* samples. 
*/ \ +/* _q : equalizer object */ \ +/* _k : down-sampling rate */ \ +/* _x : input sample array [size: _n x 1] */ \ +/* _n : input sample array length */ \ +/* _y : output sample array [size: _n x 1] */ \ +int EQLMS(_execute_block)(EQLMS() _q, \ + unsigned int _k, \ + T * _x, \ + unsigned int _n, \ + T * _y); \ + \ +/* Step through one cycle of equalizer training */ \ +/* _q : equalizer object */ \ +/* _d : desired output */ \ +/* _d_hat : actual output */ \ +int EQLMS(_step)(EQLMS() _q, \ + T _d, \ + T _d_hat); \ + \ +/* Step through one cycle of equalizer training (blind) */ \ +/* _q : equalizer object */ \ +/* _d_hat : actual output */ \ +int EQLMS(_step_blind)(EQLMS() _q, \ + T _d_hat); \ + \ +/* Get equalizer's internal coefficients */ \ +/* _q : equalizer object */ \ +/* _w : weights, [size: _p x 1] */ \ +int EQLMS(_get_weights)(EQLMS() _q, \ + T * _w); \ + \ +/* Train equalizer object on group of samples */ \ +/* _q : equalizer object */ \ +/* _w : input/output weights, [size: _p x 1] */ \ +/* _x : received sample vector,[size: _n x 1] */ \ +/* _d : desired output vector, [size: _n x 1] */ \ +/* _n : input, output vector length */ \ +int EQLMS(_train)(EQLMS() _q, \ + T * _w, \ + T * _x, \ + T * _d, \ + unsigned int _n); \ + +LIQUID_EQLMS_DEFINE_API(LIQUID_EQLMS_MANGLE_RRRF, float) +LIQUID_EQLMS_DEFINE_API(LIQUID_EQLMS_MANGLE_CCCF, liquid_float_complex) + + +// recursive least-squares (RLS) +#define LIQUID_EQRLS_MANGLE_RRRF(name) LIQUID_CONCAT(eqrls_rrrf,name) +#define LIQUID_EQRLS_MANGLE_CCCF(name) LIQUID_CONCAT(eqrls_cccf,name) + +// large macro +// EQRLS : name-mangling macro +// T : data type +#define LIQUID_EQRLS_DEFINE_API(EQRLS,T) \ + \ +/* Recursive least mean-squares equalization object */ \ +typedef struct EQRLS(_s) * EQRLS(); \ + \ +/* Create RLS EQ initialized with external coefficients */ \ +/* _h : filter coefficients; set to NULL for {1,0,0...},[size: _n x 1] */ \ +/* _n : filter length */ \ +EQRLS() EQRLS(_create)(T * _h, \ + unsigned int _n); \ + \ +/* Re-create EQ initialized with external coefficients */ \ +/* _q : equalizer object */ \ +/* _h : filter coefficients (NULL for {1,0,0...}), [size: _n x 1] */ \ +/* _n : filter length */ \ +EQRLS() EQRLS(_recreate)(EQRLS() _q, \ + T * _h, \ + unsigned int _n); \ + \ +/* Destroy equalizer object, freeing all internal memory */ \ +int EQRLS(_destroy)(EQRLS() _q); \ + \ +/* Reset equalizer object, clearing internal state */ \ +int EQRLS(_reset)(EQRLS() _q); \ + \ +/* Print equalizer internal state */ \ +int EQRLS(_print)(EQRLS() _q); \ + \ +/* Get equalizer learning rate */ \ +float EQRLS(_get_bw)(EQRLS() _q); \ + \ +/* Set equalizer learning rate */ \ +/* _q : equalizer object */ \ +/* _mu : learning rate, _mu > 0 */ \ +int EQRLS(_set_bw)(EQRLS() _q, \ + float _mu); \ + \ +/* Push sample into equalizer internal buffer */ \ +/* _q : equalizer object */ \ +/* _x : input sample */ \ +int EQRLS(_push)(EQRLS() _q, T _x); \ + \ +/* Execute internal dot product and return result */ \ +/* _q : equalizer object */ \ +/* _y : output sample */ \ +int EQRLS(_execute)(EQRLS() _q, T * _y); \ + \ +/* Step through one cycle of equalizer training */ \ +/* _q : equalizer object */ \ +/* _d : desired output */ \ +/* _d_hat : actual output */ \ +int EQRLS(_step)(EQRLS() _q, T _d, T _d_hat); \ + \ +/* Get equalizer's internal coefficients */ \ +/* _q : equalizer object */ \ +/* _w : weights, [size: _p x 1] */ \ +int EQRLS(_get_weights)(EQRLS() _q, \ + T * _w); \ + \ +/* Train equalizer object on group of samples */ \ +/* _q : equalizer 
object */ \ +/* _w : input/output weights, [size: _p x 1] */ \ +/* _x : received sample vector,[size: _n x 1] */ \ +/* _d : desired output vector, [size: _n x 1] */ \ +/* _n : input, output vector length */ \ +int EQRLS(_train)(EQRLS() _q, \ + T * _w, \ + T * _x, \ + T * _d, \ + unsigned int _n); \ + +LIQUID_EQRLS_DEFINE_API(LIQUID_EQRLS_MANGLE_RRRF, float) +LIQUID_EQRLS_DEFINE_API(LIQUID_EQRLS_MANGLE_CCCF, liquid_float_complex) + + + + +// +// MODULE : fec (forward error-correction) +// + +// soft bit values +#define LIQUID_SOFTBIT_0 (0) +#define LIQUID_SOFTBIT_1 (255) +#define LIQUID_SOFTBIT_ERASURE (127) + +// available CRC schemes +#define LIQUID_CRC_NUM_SCHEMES 7 +typedef enum { + LIQUID_CRC_UNKNOWN=0, // unknown/unavailable CRC scheme + LIQUID_CRC_NONE, // no error-detection + LIQUID_CRC_CHECKSUM, // 8-bit checksum + LIQUID_CRC_8, // 8-bit CRC + LIQUID_CRC_16, // 16-bit CRC + LIQUID_CRC_24, // 24-bit CRC + LIQUID_CRC_32 // 32-bit CRC +} crc_scheme; + +// pretty names for crc schemes +extern const char * crc_scheme_str[LIQUID_CRC_NUM_SCHEMES][2]; + +// Print compact list of existing and available CRC schemes +void liquid_print_crc_schemes(); + +// returns crc_scheme based on input string +crc_scheme liquid_getopt_str2crc(const char * _str); + +// get length of CRC (bytes) +unsigned int crc_get_length(crc_scheme _scheme); + +// generate error-detection key +// _scheme : error-detection scheme +// _msg : input data message, [size: _n x 1] +// _n : input data message size +unsigned int crc_generate_key(crc_scheme _scheme, + unsigned char * _msg, + unsigned int _n); + +// generate error-detection key and append to end of message +// _scheme : error-detection scheme (resulting in 'p' bytes) +// _msg : input data message, [size: _n+p x 1] +// _n : input data message size (excluding key at end) +int crc_append_key(crc_scheme _scheme, + unsigned char * _msg, + unsigned int _n); + +// validate message using error-detection key +// _scheme : error-detection scheme +// _msg : input data message, [size: _n x 1] +// _n : input data message size +// _key : error-detection key +int crc_validate_message(crc_scheme _scheme, + unsigned char * _msg, + unsigned int _n, + unsigned int _key); + +// check message with key appended to end of array +// _scheme : error-detection scheme (resulting in 'p' bytes) +// _msg : input data message, [size: _n+p x 1] +// _n : input data message size (excluding key at end) +int crc_check_key(crc_scheme _scheme, + unsigned char * _msg, + unsigned int _n); + +// get size of key (bytes) +unsigned int crc_sizeof_key(crc_scheme _scheme); + + +// available FEC schemes +#define LIQUID_FEC_NUM_SCHEMES 28 +typedef enum { + LIQUID_FEC_UNKNOWN=0, // unknown/unsupported scheme + LIQUID_FEC_NONE, // no error-correction + LIQUID_FEC_REP3, // simple repeat code, r1/3 + LIQUID_FEC_REP5, // simple repeat code, r1/5 + LIQUID_FEC_HAMMING74, // Hamming (7,4) block code, r1/2 (really 4/7) + LIQUID_FEC_HAMMING84, // Hamming (7,4) with extra parity bit, r1/2 + LIQUID_FEC_HAMMING128, // Hamming (12,8) block code, r2/3 + + LIQUID_FEC_GOLAY2412, // Golay (24,12) block code, r1/2 + LIQUID_FEC_SECDED2216, // SEC-DED (22,16) block code, r8/11 + LIQUID_FEC_SECDED3932, // SEC-DED (39,32) block code + LIQUID_FEC_SECDED7264, // SEC-DED (72,64) block code, r8/9 + + // codecs not defined internally (see http://www.ka9q.net/code/fec/) + LIQUID_FEC_CONV_V27, // r1/2, K=7, dfree=10 + LIQUID_FEC_CONV_V29, // r1/2, K=9, dfree=12 + LIQUID_FEC_CONV_V39, // r1/3, K=9, dfree=18 + LIQUID_FEC_CONV_V615, // r1/6, 
K=15, dfree<=57 (Heller 1968) + + // punctured (perforated) codes + LIQUID_FEC_CONV_V27P23, // r2/3, K=7, dfree=6 + LIQUID_FEC_CONV_V27P34, // r3/4, K=7, dfree=5 + LIQUID_FEC_CONV_V27P45, // r4/5, K=7, dfree=4 + LIQUID_FEC_CONV_V27P56, // r5/6, K=7, dfree=4 + LIQUID_FEC_CONV_V27P67, // r6/7, K=7, dfree=3 + LIQUID_FEC_CONV_V27P78, // r7/8, K=7, dfree=3 + + LIQUID_FEC_CONV_V29P23, // r2/3, K=9, dfree=7 + LIQUID_FEC_CONV_V29P34, // r3/4, K=9, dfree=6 + LIQUID_FEC_CONV_V29P45, // r4/5, K=9, dfree=5 + LIQUID_FEC_CONV_V29P56, // r5/6, K=9, dfree=5 + LIQUID_FEC_CONV_V29P67, // r6/7, K=9, dfree=4 + LIQUID_FEC_CONV_V29P78, // r7/8, K=9, dfree=4 + + // Reed-Solomon codes + LIQUID_FEC_RS_M8 // m=8, n=255, k=223 +} fec_scheme; + +// pretty names for fec schemes +extern const char * fec_scheme_str[LIQUID_FEC_NUM_SCHEMES][2]; + +// Print compact list of existing and available FEC schemes +void liquid_print_fec_schemes(); + +// returns fec_scheme based on input string +fec_scheme liquid_getopt_str2fec(const char * _str); + +// fec object (pointer to fec structure) +typedef struct fec_s * fec; + +// return the encoded message length using a particular error- +// correction scheme (object-independent method) +// _scheme : forward error-correction scheme +// _msg_len : raw, uncoded message length +unsigned int fec_get_enc_msg_length(fec_scheme _scheme, + unsigned int _msg_len); + +// get the theoretical rate of a particular forward error- +// correction scheme (object-independent method) +float fec_get_rate(fec_scheme _scheme); + +// create a fec object of a particular scheme +// _scheme : error-correction scheme +// _opts : (ignored) +fec fec_create(fec_scheme _scheme, + void *_opts); + +// recreate fec object +// _q : old fec object +// _scheme : new error-correction scheme +// _opts : (ignored) +fec fec_recreate(fec _q, + fec_scheme _scheme, + void *_opts); + +// destroy fec object +int fec_destroy(fec _q); + +// print fec object internals +int fec_print(fec _q); + +// encode a block of data using a fec scheme +// _q : fec object +// _dec_msg_len : decoded message length +// _msg_dec : decoded message +// _msg_enc : encoded message +int fec_encode(fec _q, + unsigned int _dec_msg_len, + unsigned char * _msg_dec, + unsigned char * _msg_enc); + +// decode a block of data using a fec scheme +// _q : fec object +// _dec_msg_len : decoded message length +// _msg_enc : encoded message +// _msg_dec : decoded message +int fec_decode(fec _q, + unsigned int _dec_msg_len, + unsigned char * _msg_enc, + unsigned char * _msg_dec); + +// decode a block of data using a fec scheme (soft decision) +// _q : fec object +// _dec_msg_len : decoded message length +// _msg_enc : encoded message (soft bits) +// _msg_dec : decoded message +int fec_decode_soft(fec _q, + unsigned int _dec_msg_len, + unsigned char * _msg_enc, + unsigned char * _msg_dec); + +// +// Packetizer +// + +// computes the number of encoded bytes after packetizing +// +// _n : number of uncoded input bytes +// _crc : error-detecting scheme +// _fec0 : inner forward error-correction code +// _fec1 : outer forward error-correction code +unsigned int packetizer_compute_enc_msg_len(unsigned int _n, + int _crc, + int _fec0, + int _fec1); + +// computes the number of decoded bytes before packetizing +// +// _k : number of encoded bytes +// _crc : error-detecting scheme +// _fec0 : inner forward error-correction code +// _fec1 : outer forward error-correction code +unsigned int packetizer_compute_dec_msg_len(unsigned int _k, + int _crc, + int _fec0, + int _fec1); + 
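
The CRC and FEC declarations above form the error-protection layer of the library. As a quick orientation, here is a minimal usage sketch built only from the prototypes shown above (crc_generate_key, crc_validate_message, fec_get_enc_msg_length, fec_create, fec_encode, fec_decode); the payload, lengths and include path are illustrative assumptions and this snippet is not part of liquid.h or of this repository.

```c
/* Minimal sketch (assumptions noted): protect a small message with a
 * 16-bit CRC and a Hamming(7,4) FEC, then decode and verify it again. */
#include <stdio.h>
#include <string.h>
#include "liquid.h"   /* include path may differ depending on installation */

int main(void)
{
    unsigned int n = 8;                              /* uncoded message length [bytes], arbitrary */
    unsigned char msg_org[8], msg_dec[8];
    memset(msg_org, 0xa5, n);                        /* arbitrary test pattern */

    /* error detection: generate a CRC key and validate the message against it */
    unsigned int key = crc_generate_key(LIQUID_CRC_16, msg_org, n);
    int crc_ok = crc_validate_message(LIQUID_CRC_16, msg_org, n, key);

    /* forward error correction: query encoded length, then encode and decode */
    fec_scheme fs = LIQUID_FEC_HAMMING74;            /* r1/2 Hamming (7,4) block code */
    unsigned int k = fec_get_enc_msg_length(fs, n);  /* encoded message length [bytes] */
    unsigned char msg_enc[k];                        /* C99 variable-length buffer */

    fec q = fec_create(fs, NULL);                    /* _opts is ignored */
    fec_encode(q, n, msg_org, msg_enc);
    fec_decode(q, n, msg_enc, msg_dec);
    fec_destroy(q);

    printf("crc %s, fec round trip %s\n",
           crc_ok ? "valid" : "invalid",
           memcmp(msg_org, msg_dec, n) == 0 ? "ok" : "failed");
    return 0;
}
```

In a real transmit chain the CRC key would typically be appended to the message (crc_append_key / crc_check_key) before FEC encoding, which is exactly what the packetizer object declared below wraps up into one operation.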
+typedef struct packetizer_s * packetizer; + +// create packetizer object +// +// _n : number of uncoded input bytes +// _crc : error-detecting scheme +// _fec0 : inner forward error-correction code +// _fec1 : outer forward error-correction code +packetizer packetizer_create(unsigned int _dec_msg_len, + int _crc, + int _fec0, + int _fec1); + +// re-create packetizer object +// +// _p : initialz packetizer object +// _n : number of uncoded input bytes +// _crc : error-detecting scheme +// _fec0 : inner forward error-correction code +// _fec1 : outer forward error-correction code +packetizer packetizer_recreate(packetizer _p, + unsigned int _dec_msg_len, + int _crc, + int _fec0, + int _fec1); + +// destroy packetizer object +void packetizer_destroy(packetizer _p); + +// print packetizer object internals +void packetizer_print(packetizer _p); + +// access methods +unsigned int packetizer_get_dec_msg_len(packetizer _p); +unsigned int packetizer_get_enc_msg_len(packetizer _p); +crc_scheme packetizer_get_crc (packetizer _p); +fec_scheme packetizer_get_fec0 (packetizer _p); +fec_scheme packetizer_get_fec1 (packetizer _p); + + +// Execute the packetizer on an input message +// +// _p : packetizer object +// _msg : input message (uncoded bytes) +// _pkt : encoded output message +void packetizer_encode(packetizer _p, + const unsigned char * _msg, + unsigned char * _pkt); + +// Execute the packetizer to decode an input message, return validity +// check of resulting data +// +// _p : packetizer object +// _pkt : input message (coded bytes) +// _msg : decoded output message +int packetizer_decode(packetizer _p, + const unsigned char * _pkt, + unsigned char * _msg); + +// Execute the packetizer to decode an input message, return validity +// check of resulting data +// +// _p : packetizer object +// _pkt : input message (coded soft bits) +// _msg : decoded output message +int packetizer_decode_soft(packetizer _p, + const unsigned char * _pkt, + unsigned char * _msg); + + +// +// interleaver +// +typedef struct interleaver_s * interleaver; + +// create interleaver +// _n : number of bytes +interleaver interleaver_create(unsigned int _n); + +// destroy interleaver object +void interleaver_destroy(interleaver _q); + +// print interleaver object internals +void interleaver_print(interleaver _q); + +// set depth (number of internal iterations) +// _q : interleaver object +// _depth : depth +void interleaver_set_depth(interleaver _q, + unsigned int _depth); + +// execute forward interleaver (encoder) +// _q : interleaver object +// _msg_dec : decoded (un-interleaved) message +// _msg_enc : encoded (interleaved) message +void interleaver_encode(interleaver _q, + unsigned char * _msg_dec, + unsigned char * _msg_enc); + +// execute forward interleaver (encoder) on soft bits +// _q : interleaver object +// _msg_dec : decoded (un-interleaved) message +// _msg_enc : encoded (interleaved) message +void interleaver_encode_soft(interleaver _q, + unsigned char * _msg_dec, + unsigned char * _msg_enc); + +// execute reverse interleaver (decoder) +// _q : interleaver object +// _msg_enc : encoded (interleaved) message +// _msg_dec : decoded (un-interleaved) message +void interleaver_decode(interleaver _q, + unsigned char * _msg_enc, + unsigned char * _msg_dec); + +// execute reverse interleaver (decoder) on soft bits +// _q : interleaver object +// _msg_enc : encoded (interleaved) message +// _msg_dec : decoded (un-interleaved) message +void interleaver_decode_soft(interleaver _q, + unsigned char * _msg_enc, + unsigned 
char * _msg_dec); + + + +// +// MODULE : fft (fast Fourier transform) +// + +// type of transform +typedef enum { + LIQUID_FFT_UNKNOWN = 0, // unknown transform type + + // regular complex one-dimensional transforms + LIQUID_FFT_FORWARD = +1, // complex one-dimensional FFT + LIQUID_FFT_BACKWARD = -1, // complex one-dimensional inverse FFT + + // discrete cosine transforms + LIQUID_FFT_REDFT00 = 10, // real one-dimensional DCT-I + LIQUID_FFT_REDFT10 = 11, // real one-dimensional DCT-II + LIQUID_FFT_REDFT01 = 12, // real one-dimensional DCT-III + LIQUID_FFT_REDFT11 = 13, // real one-dimensional DCT-IV + + // discrete sine transforms + LIQUID_FFT_RODFT00 = 20, // real one-dimensional DST-I + LIQUID_FFT_RODFT10 = 21, // real one-dimensional DST-II + LIQUID_FFT_RODFT01 = 22, // real one-dimensional DST-III + LIQUID_FFT_RODFT11 = 23, // real one-dimensional DST-IV + + // modified discrete cosine transform + LIQUID_FFT_MDCT = 30, // MDCT + LIQUID_FFT_IMDCT = 31, // IMDCT +} liquid_fft_type; + +#define LIQUID_FFT_MANGLE_FLOAT(name) LIQUID_CONCAT(fft,name) + +// Macro : FFT +// FFT : name-mangling macro +// T : primitive data type +// TC : primitive data type (complex) +#define LIQUID_FFT_DEFINE_API(FFT,T,TC) \ + \ +/* Fast Fourier Transform (FFT) and inverse (plan) object */ \ +typedef struct FFT(plan_s) * FFT(plan); \ + \ +/* Create regular complex one-dimensional transform */ \ +/* _n : transform size */ \ +/* _x : pointer to input array [size: _n x 1] */ \ +/* _y : pointer to output array [size: _n x 1] */ \ +/* _dir : direction (e.g. LIQUID_FFT_FORWARD) */ \ +/* _flags : options, optimization */ \ +FFT(plan) FFT(_create_plan)(unsigned int _n, \ + TC * _x, \ + TC * _y, \ + int _dir, \ + int _flags); \ + \ +/* Create real-to-real one-dimensional transform */ \ +/* _n : transform size */ \ +/* _x : pointer to input array [size: _n x 1] */ \ +/* _y : pointer to output array [size: _n x 1] */ \ +/* _type : transform type (e.g. LIQUID_FFT_REDFT00) */ \ +/* _flags : options, optimization */ \ +FFT(plan) FFT(_create_plan_r2r_1d)(unsigned int _n, \ + T * _x, \ + T * _y, \ + int _type, \ + int _flags); \ + \ +/* Destroy transform and free all internally-allocated memory */ \ +int FFT(_destroy_plan)(FFT(plan) _p); \ + \ +/* Print transform plan and internal strategy to stdout. This includes */ \ +/* information on the strategy for computing large transforms with many */ \ +/* prime factors or with large prime factors. */ \ +int FFT(_print_plan)(FFT(plan) _p); \ + \ +/* Run the transform */ \ +int FFT(_execute)(FFT(plan) _p); \ + \ +/* Perform n-point FFT allocating plan internally */ \ +/* _nfft : fft size */ \ +/* _x : input array [size: _nfft x 1] */ \ +/* _y : output array [size: _nfft x 1] */ \ +/* _dir : fft direction: LIQUID_FFT_{FORWARD,BACKWARD} */ \ +/* _flags : fft flags */ \ +int FFT(_run)(unsigned int _n, \ + TC * _x, \ + TC * _y, \ + int _dir, \ + int _flags); \ + \ +/* Perform n-point real one-dimensional FFT allocating plan internally */ \ +/* _nfft : fft size */ \ +/* _x : input array [size: _nfft x 1] */ \ +/* _y : output array [size: _nfft x 1] */ \ +/* _type : fft type, e.g. 
LIQUID_FFT_REDFT10 */ \ +/* _flags : fft flags */ \ +int FFT(_r2r_1d_run)(unsigned int _n, \ + T * _x, \ + T * _y, \ + int _type, \ + int _flags); \ + \ +/* Perform _n-point fft shift */ \ +/* _x : input array [size: _n x 1] */ \ +/* _n : input array size */ \ +int FFT(_shift)(TC * _x, \ + unsigned int _n); \ + + +LIQUID_FFT_DEFINE_API(LIQUID_FFT_MANGLE_FLOAT,float,liquid_float_complex) + +// antiquated fft methods +// FFT(plan) FFT(_create_plan_mdct)(unsigned int _n, +// T * _x, +// T * _y, +// int _kind, +// int _flags); + + +// +// spectral periodogram +// + +#define LIQUID_SPGRAM_MANGLE_CFLOAT(name) LIQUID_CONCAT(spgramcf,name) +#define LIQUID_SPGRAM_MANGLE_FLOAT(name) LIQUID_CONCAT(spgramf, name) + +#define LIQUID_SPGRAM_PSD_MIN (1e-12) + +// Macro : SPGRAM +// SPGRAM : name-mangling macro +// T : primitive data type +// TC : primitive data type (complex) +// TI : primitive data type (input) +#define LIQUID_SPGRAM_DEFINE_API(SPGRAM,T,TC,TI) \ + \ +/* Spectral periodogram object for computing power spectral density */ \ +/* estimates of various signals */ \ +typedef struct SPGRAM(_s) * SPGRAM(); \ + \ +/* Create spgram object, fully defined */ \ +/* _nfft : transform (FFT) size, _nfft >= 2 */ \ +/* _wtype : window type, e.g. LIQUID_WINDOW_HAMMING */ \ +/* _window_len : window length, 1 <= _window_len <= _nfft */ \ +/* _delay : delay between transforms, _delay > 0 */ \ +SPGRAM() SPGRAM(_create)(unsigned int _nfft, \ + int _wtype, \ + unsigned int _window_len, \ + unsigned int _delay); \ + \ +/* Create default spgram object of a particular transform size using */ \ +/* the Kaiser-Bessel window (LIQUID_WINDOW_KAISER), a window length */ \ +/* equal to _nfft/2, and a delay of _nfft/4 */ \ +/* _nfft : FFT size, _nfft >= 2 */ \ +SPGRAM() SPGRAM(_create_default)(unsigned int _nfft); \ + \ +/* Destroy spgram object, freeing all internally-allocated memory */ \ +int SPGRAM(_destroy)(SPGRAM() _q); \ + \ +/* Clears the internal state of the object, but not the internal buffer */ \ +int SPGRAM(_clear)(SPGRAM() _q); \ + \ +/* Reset the object to its original state completely. This effectively */ \ +/* executes the clear() method and then resets the internal buffer */ \ +int SPGRAM(_reset)(SPGRAM() _q); \ + \ +/* Print internal state of the object to stdout */ \ +int SPGRAM(_print)(SPGRAM() _q); \ + \ +/* Set the filter bandwidth for accumulating independent transform */ \ +/* squared magnitude outputs. */ \ +/* This is used to compute a running time-average power spectral */ \ +/* density output. */ \ +/* The value of _alpha determines how the power spectral estimate is */ \ +/* accumulated across transforms and can range from 0 to 1 with a */ \ +/* special case of -1 to accumulate infinitely. */ \ +/* Setting _alpha to 0 minimizes the bandwidth and the PSD estimate */ \ +/* will never update. */ \ +/* Setting _alpha to 1 forces the object to always use the most recent */ \ +/* spectral estimate. */ \ +/* Setting _alpha to -1 is a special case to enable infinite spectral */ \ +/* accumulation. */ \ +/* _q : spectral periodogram object */ \ +/* _alpha : forgetting factor, set to -1 for infinite, 0<=_alpha<=1 */ \ +int SPGRAM(_set_alpha)(SPGRAM() _q, \ + float _alpha); \ + \ +/* Get the filter bandwidth for accumulating independent transform */ \ +/* squared magnitude outputs. */ \ +float SPGRAM(_get_alpha)(SPGRAM() _q); \ + \ +/* Set the center frequency of the received signal. */ \ +/* This is for display purposes only when generating the output image. 
*/ \ +/* _q : spectral periodogram object */ \ +/* _freq : center frequency [Hz] */ \ +int SPGRAM(_set_freq)(SPGRAM() _q, \ + float _freq); \ + \ +/* Set the sample rate (frequency) of the received signal. */ \ +/* This is for display purposes only when generating the output image. */ \ +/* _q : spectral periodogram object */ \ +/* _rate : sample rate [Hz] */ \ +int SPGRAM(_set_rate)(SPGRAM() _q, \ + float _rate); \ + \ +/* Get transform (FFT) size */ \ +unsigned int SPGRAM(_get_nfft)(SPGRAM() _q); \ + \ +/* Get window length */ \ +unsigned int SPGRAM(_get_window_len)(SPGRAM() _q); \ + \ +/* Get delay between transforms */ \ +unsigned int SPGRAM(_get_delay)(SPGRAM() _q); \ + \ +/* Get number of samples processed since reset */ \ +unsigned long long int SPGRAM(_get_num_samples)(SPGRAM() _q); \ + \ +/* Get number of samples processed since object was created */ \ +unsigned long long int SPGRAM(_get_num_samples_total)(SPGRAM() _q); \ + \ +/* Get number of transforms processed since reset */ \ +unsigned long long int SPGRAM(_get_num_transforms)(SPGRAM() _q); \ + \ +/* Get number of transforms processed since object was created */ \ +unsigned long long int SPGRAM(_get_num_transforms_total)(SPGRAM() _q); \ + \ +/* Push a single sample into the object, executing internal transform */ \ +/* as necessary. */ \ +/* _q : spgram object */ \ +/* _x : input sample */ \ +int SPGRAM(_push)(SPGRAM() _q, \ + TI _x); \ + \ +/* Write a block of samples to the object, executing internal */ \ +/* transform as necessary. */ \ +/* _q : spgram object */ \ +/* _x : input buffer [size: _n x 1] */ \ +/* _n : input buffer length */ \ +int SPGRAM(_write)(SPGRAM() _q, \ + TI * _x, \ + unsigned int _n); \ + \ +/* Compute spectral periodogram output (fft-shifted values in dB) from */ \ +/* current buffer contents */ \ +/* _q : spgram object */ \ +/* _X : output spectrum (dB), [size: _nfft x 1] */ \ +int SPGRAM(_get_psd)(SPGRAM() _q, \ + T * _X); \ + \ +/* Export stand-alone gnuplot file for plotting output spectrum, */ \ +/* returning 0 on sucess, anything other than 0 for failure */ \ +/* _q : spgram object */ \ +/* _filename : input buffer [size: _n x 1] */ \ +int SPGRAM(_export_gnuplot)(SPGRAM() _q, \ + const char * _filename); \ + \ +/* Estimate spectrum on input signal (create temporary object for */ \ +/* convenience */ \ +/* _nfft : FFT size */ \ +/* _x : input signal [size: _n x 1] */ \ +/* _n : input signal length */ \ +/* _psd : output spectrum, [size: _nfft x 1] */ \ +int SPGRAM(_estimate_psd)(unsigned int _nfft, \ + TI * _x, \ + unsigned int _n, \ + T * _psd); \ + +LIQUID_SPGRAM_DEFINE_API(LIQUID_SPGRAM_MANGLE_CFLOAT, + float, + liquid_float_complex, + liquid_float_complex) + +LIQUID_SPGRAM_DEFINE_API(LIQUID_SPGRAM_MANGLE_FLOAT, + float, + liquid_float_complex, + float) + +// +// asgram : ascii spectral periodogram +// + +#define LIQUID_ASGRAM_MANGLE_CFLOAT(name) LIQUID_CONCAT(asgramcf,name) +#define LIQUID_ASGRAM_MANGLE_FLOAT(name) LIQUID_CONCAT(asgramf, name) + +// Macro : ASGRAM +// ASGRAM : name-mangling macro +// T : primitive data type +// TC : primitive data type (complex) +// TI : primitive data type (input) +#define LIQUID_ASGRAM_DEFINE_API(ASGRAM,T,TC,TI) \ + \ +/* ASCII spectral periodogram for computing and displaying an estimate */ \ +/* of a signal's power spectrum with ASCII characters */ \ +typedef struct ASGRAM(_s) * ASGRAM(); \ + \ +/* Create asgram object with size _nfft */ \ +/* _nfft : size of FFT taken for each transform (character width) */ \ +ASGRAM() ASGRAM(_create)(unsigned int 
_nfft); \ + \ +/* Destroy asgram object, freeing all internally-allocated memory */ \ +int ASGRAM(_destroy)(ASGRAM() _q); \ + \ +/* Reset the internal state of the asgram object */ \ +int ASGRAM(_reset)(ASGRAM() _q); \ + \ +/* Set the scale and offset for spectrogram in terms of dB for display */ \ +/* purposes */ \ +/* _q : asgram object */ \ +/* _ref : signal reference level [dB] */ \ +/* _div : signal division [dB] */ \ +int ASGRAM(_set_scale)(ASGRAM() _q, \ + float _ref, \ + float _div); \ + \ +/* Set the display's 10 characters for output string starting from the */ \ +/* weakest and ending with the strongest */ \ +/* _q : asgram object */ \ +/* _ascii : 10-character display, default: " .,-+*&NM#" */ \ +int ASGRAM(_set_display)(ASGRAM() _q, \ + const char * _ascii); \ + \ +/* Push a single sample into the asgram object, executing internal */ \ +/* transform as necessary. */ \ +/* _q : asgram object */ \ +/* _x : input sample */ \ +int ASGRAM(_push)(ASGRAM() _q, \ + TI _x); \ + \ +/* Write a block of samples to the asgram object, executing internal */ \ +/* transforms as necessary. */ \ +/* _q : asgram object */ \ +/* _x : input buffer [size: _n x 1] */ \ +/* _n : input buffer length */ \ +int ASGRAM(_write)(ASGRAM() _q, \ + TI * _x, \ + unsigned int _n); \ + \ +/* Compute spectral periodogram output from current buffer contents */ \ +/* and return the ascii character string to display along with the peak */ \ +/* value and its frequency location */ \ +/* _q : asgram object */ \ +/* _ascii : output ASCII string [size: _nfft x 1] */ \ +/* _peakval : peak power spectral density value [dB] */ \ +/* _peakfreq : peak power spectral density frequency */ \ +int ASGRAM(_execute)(ASGRAM() _q, \ + char * _ascii, \ + float * _peakval, \ + float * _peakfreq); \ + \ +/* Compute spectral periodogram output from current buffer contents and */ \ +/* print standard format to stdout */ \ +int ASGRAM(_print)(ASGRAM() _q); \ + +LIQUID_ASGRAM_DEFINE_API(LIQUID_ASGRAM_MANGLE_CFLOAT, + float, + liquid_float_complex, + liquid_float_complex) + +LIQUID_ASGRAM_DEFINE_API(LIQUID_ASGRAM_MANGLE_FLOAT, + float, + liquid_float_complex, + float) + +// +// spectral periodogram waterfall +// + +#define LIQUID_SPWATERFALL_MANGLE_CFLOAT(name) LIQUID_CONCAT(spwaterfallcf,name) +#define LIQUID_SPWATERFALL_MANGLE_FLOAT(name) LIQUID_CONCAT(spwaterfallf, name) + +// Macro : SPWATERFALL +// SPWATERFALL : name-mangling macro +// T : primitive data type +// TC : primitive data type (complex) +// TI : primitive data type (input) +#define LIQUID_SPWATERFALL_DEFINE_API(SPWATERFALL,T,TC,TI) \ + \ +/* Spectral periodogram waterfall object for computing time-varying */ \ +/* power spectral density estimates */ \ +typedef struct SPWATERFALL(_s) * SPWATERFALL(); \ + \ +/* Create spwaterfall object, fully defined */ \ +/* _nfft : transform (FFT) size, _nfft >= 2 */ \ +/* _wtype : window type, e.g. 
LIQUID_WINDOW_HAMMING */ \ +/* _window_len : window length, 1 <= _window_len <= _nfft */ \ +/* _delay : delay between transforms, _delay > 0 */ \ +/* _time : number of aggregated transforms, _time > 0 */ \ +SPWATERFALL() SPWATERFALL(_create)(unsigned int _nfft, \ + int _wtype, \ + unsigned int _window_len, \ + unsigned int _delay, \ + unsigned int _time); \ + \ +/* Create default spwatefall object (Kaiser-Bessel window) */ \ +/* _nfft : transform size, _nfft >= 2 */ \ +/* _time : delay between transforms, _delay > 0 */ \ +SPWATERFALL() SPWATERFALL(_create_default)(unsigned int _nfft, \ + unsigned int _time); \ + \ +/* Destroy spwaterfall object, freeing all internally-allocated memory */ \ +int SPWATERFALL(_destroy)(SPWATERFALL() _q); \ + \ +/* Clears the internal state of the object, but not the internal buffer */ \ +int SPWATERFALL(_clear)(SPWATERFALL() _q); \ + \ +/* Reset the object to its original state completely. This effectively */ \ +/* executes the clear() method and then resets the internal buffer */ \ +int SPWATERFALL(_reset)(SPWATERFALL() _q); \ + \ +/* Print internal state of the object to stdout */ \ +int SPWATERFALL(_print)(SPWATERFALL() _q); \ + \ +/* Get number of samples processed since object was created */ \ +uint64_t SPWATERFALL(_get_num_samples_total)(SPWATERFALL() _q); \ + \ +/* Get FFT size (columns in PSD output) */ \ +unsigned int SPWATERFALL(_get_num_freq)(SPWATERFALL() _q); \ + \ +/* Get number of accumulated FFTs (rows in PSD output) */ \ +unsigned int SPWATERFALL(_get_num_time)(SPWATERFALL() _q); \ + \ +/* Get power spectral density (PSD), size: nfft x time */ \ +const T * SPWATERFALL(_get_psd)(SPWATERFALL() _q); \ + \ +/* Set the center frequency of the received signal. */ \ +/* This is for display purposes only when generating the output image. */ \ +/* _q : spectral periodogram waterfall object */ \ +/* _freq : center frequency [Hz] */ \ +int SPWATERFALL(_set_freq)(SPWATERFALL() _q, \ + float _freq); \ + \ +/* Set the sample rate (frequency) of the received signal. */ \ +/* This is for display purposes only when generating the output image. */ \ +/* _q : spectral periodogram waterfall object */ \ +/* _rate : sample rate [Hz] */ \ +int SPWATERFALL(_set_rate)(SPWATERFALL() _q, \ + float _rate); \ + \ +/* Set the canvas size. */ \ +/* This is for display purposes only when generating the output image. */ \ +/* _q : spectral periodogram waterfall object */ \ +/* _width : image width [pixels] */ \ +/* _height : image height [pixels] */ \ +int SPWATERFALL(_set_dims)(SPWATERFALL() _q, \ + unsigned int _width, \ + unsigned int _height); \ + \ +/* Set commands for executing directly before 'plot' statement. */ \ +/* _q : spectral periodogram waterfall object */ \ +/* _commands : gnuplot commands separated by semicolons */ \ +int SPWATERFALL(_set_commands)(SPWATERFALL() _q, \ + const char * _commands); \ + \ +/* Push a single sample into the object, executing internal transform */ \ +/* as necessary. */ \ +/* _q : spwaterfall object */ \ +/* _x : input sample */ \ +int SPWATERFALL(_push)(SPWATERFALL() _q, \ + TI _x); \ + \ +/* Write a block of samples to the object, executing internal */ \ +/* transform as necessary. 
*/ \ +/* _q : spwaterfall object */ \ +/* _x : input buffer, [size: _n x 1] */ \ +/* _n : input buffer length */ \ +int SPWATERFALL(_write)(SPWATERFALL() _q, \ + TI * _x, \ + unsigned int _n); \ + \ +/* Export set of files for plotting */ \ +/* _q : spwaterfall object */ \ +/* _base : base filename (will export .gnu, .bin, and .png files) */ \ +int SPWATERFALL(_export)(SPWATERFALL() _q, \ + const char * _base); \ + + +LIQUID_SPWATERFALL_DEFINE_API(LIQUID_SPWATERFALL_MANGLE_CFLOAT, + float, + liquid_float_complex, + liquid_float_complex) + +LIQUID_SPWATERFALL_DEFINE_API(LIQUID_SPWATERFALL_MANGLE_FLOAT, + float, + liquid_float_complex, + float) + + +// +// MODULE : filter +// + +// +// firdes: finite impulse response filter design +// + +// prototypes +#define LIQUID_FIRFILT_NUM_TYPES (16) +typedef enum { + LIQUID_FIRFILT_UNKNOWN=0, // unknown filter type + + // Nyquist filter prototypes + LIQUID_FIRFILT_KAISER, // Nyquist Kaiser filter + LIQUID_FIRFILT_PM, // Parks-McClellan filter + LIQUID_FIRFILT_RCOS, // raised-cosine filter + LIQUID_FIRFILT_FEXP, // flipped exponential + LIQUID_FIRFILT_FSECH, // flipped hyperbolic secant + LIQUID_FIRFILT_FARCSECH, // flipped arc-hyperbolic secant + + // root-Nyquist filter prototypes + LIQUID_FIRFILT_ARKAISER, // root-Nyquist Kaiser (approximate optimum) + LIQUID_FIRFILT_RKAISER, // root-Nyquist Kaiser (true optimum) + LIQUID_FIRFILT_RRC, // root raised-cosine + LIQUID_FIRFILT_hM3, // harris-Moerder-3 filter + LIQUID_FIRFILT_GMSKTX, // GMSK transmit filter + LIQUID_FIRFILT_GMSKRX, // GMSK receive filter + LIQUID_FIRFILT_RFEXP, // flipped exponential + LIQUID_FIRFILT_RFSECH, // flipped hyperbolic secant + LIQUID_FIRFILT_RFARCSECH, // flipped arc-hyperbolic secant +} liquid_firfilt_type; + +// Design (root-)Nyquist filter from prototype +// _type : filter type (e.g. 
LIQUID_FIRFILT_RRC) +// _k : samples/symbol, _k > 1 +// _m : symbol delay, _m > 0 +// _beta : excess bandwidth factor, _beta in [0,1) +// _dt : fractional sample delay, _dt in [-1,1] +// _h : output coefficient buffer (length: 2*_k*_m+1) +void liquid_firdes_prototype(liquid_firfilt_type _type, + unsigned int _k, + unsigned int _m, + float _beta, + float _dt, + float * _h); + +// pretty names for filter design types +extern const char * liquid_firfilt_type_str[LIQUID_FIRFILT_NUM_TYPES][2]; + +// returns filter type based on input string +int liquid_getopt_str2firfilt(const char * _str); + +// estimate required filter length given +// _df : transition bandwidth (0 < _b < 0.5) +// _As : stop-band attenuation [dB], _As > 0 +unsigned int estimate_req_filter_len(float _df, + float _As); + +// estimate filter stop-band attenuation given +// _df : transition bandwidth (0 < _b < 0.5) +// _N : filter length +float estimate_req_filter_As(float _df, + unsigned int _N); + +// estimate filter transition bandwidth given +// _As : stop-band attenuation [dB], _As > 0 +// _N : filter length +float estimate_req_filter_df(float _As, + unsigned int _N); + + +// returns the Kaiser window beta factor give the filter's target +// stop-band attenuation (As) [Vaidyanathan:1993] +// _As : target filter's stop-band attenuation [dB], _As > 0 +float kaiser_beta_As(float _As); + + +// Design FIR filter using Parks-McClellan algorithm + +// band type specifier +typedef enum { + LIQUID_FIRDESPM_BANDPASS=0, // regular band-pass filter + LIQUID_FIRDESPM_DIFFERENTIATOR, // differentiating filter + LIQUID_FIRDESPM_HILBERT // Hilbert transform +} liquid_firdespm_btype; + +// weighting type specifier +typedef enum { + LIQUID_FIRDESPM_FLATWEIGHT=0, // flat weighting + LIQUID_FIRDESPM_EXPWEIGHT, // exponential weighting + LIQUID_FIRDESPM_LINWEIGHT, // linear weighting +} liquid_firdespm_wtype; + +// run filter design (full life cycle of object) +// _h_len : length of filter (number of taps) +// _num_bands : number of frequency bands +// _bands : band edges, f in [0,0.5], [size: _num_bands x 2] +// _des : desired response [size: _num_bands x 1] +// _weights : response weighting [size: _num_bands x 1] +// _wtype : weight types (e.g. LIQUID_FIRDESPM_FLATWEIGHT) [size: _num_bands x 1] +// _btype : band type (e.g. 
LIQUID_FIRDESPM_BANDPASS) +// _h : output coefficients array [size: _h_len x 1] +int firdespm_run(unsigned int _h_len, + unsigned int _num_bands, + float * _bands, + float * _des, + float * _weights, + liquid_firdespm_wtype * _wtype, + liquid_firdespm_btype _btype, + float * _h); + +// run filter design for basic low-pass filter +// _n : filter length, _n > 0 +// _fc : cutoff frequency, 0 < _fc < 0.5 +// _As : stop-band attenuation [dB], _As > 0 +// _mu : fractional sample offset, -0.5 < _mu < 0.5 [ignored] +// _h : output coefficient buffer, [size: _n x 1] +int firdespm_lowpass(unsigned int _n, + float _fc, + float _As, + float _mu, + float * _h); + +// firdespm response callback function +// _frequency : normalized frequency +// _userdata : pointer to userdata +// _desired : (return) desired response +// _weight : (return) weight +typedef int (*firdespm_callback)(double _frequency, + void * _userdata, + double * _desired, + double * _weight); + +// structured object +typedef struct firdespm_s * firdespm; + +// create firdespm object +// _h_len : length of filter (number of taps) +// _num_bands : number of frequency bands +// _bands : band edges, f in [0,0.5], [size: _num_bands x 2] +// _des : desired response [size: _num_bands x 1] +// _weights : response weighting [size: _num_bands x 1] +// _wtype : weight types (e.g. LIQUID_FIRDESPM_FLATWEIGHT) [size: _num_bands x 1] +// _btype : band type (e.g. LIQUID_FIRDESPM_BANDPASS) +firdespm firdespm_create(unsigned int _h_len, + unsigned int _num_bands, + float * _bands, + float * _des, + float * _weights, + liquid_firdespm_wtype * _wtype, + liquid_firdespm_btype _btype); + +// create firdespm object with user-defined callback +// _h_len : length of filter (number of taps) +// _num_bands : number of frequency bands +// _bands : band edges, f in [0,0.5], [size: _num_bands x 2] +// _btype : band type (e.g. 
LIQUID_FIRDESPM_BANDPASS) +// _callback : user-defined callback for specifying desired response & weights +// _userdata : user-defined data structure for callback function +firdespm firdespm_create_callback(unsigned int _h_len, + unsigned int _num_bands, + float * _bands, + liquid_firdespm_btype _btype, + firdespm_callback _callback, + void * _userdata); + +// destroy firdespm object +int firdespm_destroy(firdespm _q); + +// print firdespm object internals +int firdespm_print(firdespm _q); + +// execute filter design, storing result in _h +int firdespm_execute(firdespm _q, float * _h); + + +// Design FIR using kaiser window +// _n : filter length, _n > 0 +// _fc : cutoff frequency, 0 < _fc < 0.5 +// _As : stop-band attenuation [dB], _As > 0 +// _mu : fractional sample offset, -0.5 < _mu < 0.5 +// _h : output coefficient buffer, [size: _n x 1] +void liquid_firdes_kaiser(unsigned int _n, + float _fc, + float _As, + float _mu, + float *_h); + +// Design finite impulse response notch filter +// _m : filter semi-length, m in [1,1000] +// _f0 : filter notch frequency (normalized), -0.5 <= _fc <= 0.5 +// _As : stop-band attenuation [dB], _As > 0 +// _h : output coefficient buffer, [size: 2*_m+1 x 1] +void liquid_firdes_notch(unsigned int _m, + float _f0, + float _As, + float * _h); + +// Design FIR doppler filter +// _n : filter length +// _fd : normalized doppler frequency (0 < _fd < 0.5) +// _K : Rice fading factor (K >= 0) +// _theta : LoS component angle of arrival +// _h : output coefficient buffer +void liquid_firdes_doppler(unsigned int _n, + float _fd, + float _K, + float _theta, + float * _h); + + +// Design Nyquist raised-cosine filter +// _k : samples/symbol +// _m : symbol delay +// _beta : rolloff factor (0 < beta <= 1) +// _dt : fractional sample delay +// _h : output coefficient buffer (length: 2*k*m+1) +void liquid_firdes_rcos(unsigned int _k, + unsigned int _m, + float _beta, + float _dt, + float * _h); + +// Design root-Nyquist raised-cosine filter +void liquid_firdes_rrcos(unsigned int _k, unsigned int _m, float _beta, float _dt, float * _h); + +// Design root-Nyquist Kaiser filter +void liquid_firdes_rkaiser(unsigned int _k, unsigned int _m, float _beta, float _dt, float * _h); + +// Design (approximate) root-Nyquist Kaiser filter +void liquid_firdes_arkaiser(unsigned int _k, unsigned int _m, float _beta, float _dt, float * _h); + +// Design root-Nyquist harris-Moerder filter +void liquid_firdes_hM3(unsigned int _k, unsigned int _m, float _beta, float _dt, float * _h); + +// Design GMSK transmit and receive filters +void liquid_firdes_gmsktx(unsigned int _k, unsigned int _m, float _beta, float _dt, float * _h); +void liquid_firdes_gmskrx(unsigned int _k, unsigned int _m, float _beta, float _dt, float * _h); + +// Design flipped exponential Nyquist/root-Nyquist filters +void liquid_firdes_fexp( unsigned int _k, unsigned int _m, float _beta, float _dt, float * _h); +void liquid_firdes_rfexp(unsigned int _k, unsigned int _m, float _beta, float _dt, float * _h); + +// Design flipped hyperbolic secand Nyquist/root-Nyquist filters +void liquid_firdes_fsech( unsigned int _k, unsigned int _m, float _beta, float _dt, float * _h); +void liquid_firdes_rfsech(unsigned int _k, unsigned int _m, float _beta, float _dt, float * _h); + +// Design flipped arc-hyperbolic secand Nyquist/root-Nyquist filters +void liquid_firdes_farcsech( unsigned int _k, unsigned int _m, float _beta, float _dt, float * _h); +void liquid_firdes_rfarcsech(unsigned int _k, unsigned int _m, float _beta, float _dt, 
float * _h); + +// Compute group delay for an FIR filter +// _h : filter coefficients array +// _n : filter length +// _fc : frequency at which delay is evaluated (-0.5 < _fc < 0.5) +float fir_group_delay(float * _h, + unsigned int _n, + float _fc); + +// Compute group delay for an IIR filter +// _b : filter numerator coefficients +// _nb : filter numerator length +// _a : filter denominator coefficients +// _na : filter denominator length +// _fc : frequency at which delay is evaluated (-0.5 < _fc < 0.5) +float iir_group_delay(float * _b, + unsigned int _nb, + float * _a, + unsigned int _na, + float _fc); + + +// liquid_filter_autocorr() +// +// Compute auto-correlation of filter at a specific lag. +// +// _h : filter coefficients [size: _h_len x 1] +// _h_len : filter length +// _lag : auto-correlation lag (samples) +float liquid_filter_autocorr(float * _h, + unsigned int _h_len, + int _lag); + +// liquid_filter_crosscorr() +// +// Compute cross-correlation of two filters at a specific lag. +// +// _h : filter coefficients [size: _h_len] +// _h_len : filter length +// _g : filter coefficients [size: _g_len] +// _g_len : filter length +// _lag : cross-correlation lag (samples) +float liquid_filter_crosscorr(float * _h, + unsigned int _h_len, + float * _g, + unsigned int _g_len, + int _lag); + +// liquid_filter_isi() +// +// Compute inter-symbol interference (ISI)--both RMS and +// maximum--for the filter _h. +// +// _h : filter coefficients [size: 2*_k*_m+1 x 1] +// _k : filter over-sampling rate (samples/symbol) +// _m : filter delay (symbols) +// _rms : output root mean-squared ISI +// _max : maximum ISI +void liquid_filter_isi(float * _h, + unsigned int _k, + unsigned int _m, + float * _rms, + float * _max); + +// Compute relative out-of-band energy +// +// _h : filter coefficients [size: _h_len x 1] +// _h_len : filter length +// _fc : analysis cut-off frequency +// _nfft : fft size +float liquid_filter_energy(float * _h, + unsigned int _h_len, + float _fc, + unsigned int _nfft); + + +// +// IIR filter design +// + +// IIR filter design filter type +typedef enum { + LIQUID_IIRDES_BUTTER=0, + LIQUID_IIRDES_CHEBY1, + LIQUID_IIRDES_CHEBY2, + LIQUID_IIRDES_ELLIP, + LIQUID_IIRDES_BESSEL +} liquid_iirdes_filtertype; + +// IIR filter design band type +typedef enum { + LIQUID_IIRDES_LOWPASS=0, + LIQUID_IIRDES_HIGHPASS, + LIQUID_IIRDES_BANDPASS, + LIQUID_IIRDES_BANDSTOP +} liquid_iirdes_bandtype; + +// IIR filter design coefficients format +typedef enum { + LIQUID_IIRDES_SOS=0, + LIQUID_IIRDES_TF +} liquid_iirdes_format; + +// IIR filter design template +// _ftype : filter type (e.g. LIQUID_IIRDES_BUTTER) +// _btype : band type (e.g. LIQUID_IIRDES_BANDPASS) +// _format : coefficients format (e.g. 
LIQUID_IIRDES_SOS) +// _n : filter order +// _fc : low-pass prototype cut-off frequency +// _f0 : center frequency (band-pass, band-stop) +// _Ap : pass-band ripple in dB +// _As : stop-band ripple in dB +// _B : numerator +// _A : denominator +void liquid_iirdes(liquid_iirdes_filtertype _ftype, + liquid_iirdes_bandtype _btype, + liquid_iirdes_format _format, + unsigned int _n, + float _fc, + float _f0, + float _Ap, + float _As, + float * _B, + float * _A); + +// compute analog zeros, poles, gain for specific filter types +void butter_azpkf(unsigned int _n, + liquid_float_complex * _za, + liquid_float_complex * _pa, + liquid_float_complex * _ka); +void cheby1_azpkf(unsigned int _n, + float _ep, + liquid_float_complex * _z, + liquid_float_complex * _p, + liquid_float_complex * _k); +void cheby2_azpkf(unsigned int _n, + float _es, + liquid_float_complex * _z, + liquid_float_complex * _p, + liquid_float_complex * _k); +void ellip_azpkf(unsigned int _n, + float _ep, + float _es, + liquid_float_complex * _z, + liquid_float_complex * _p, + liquid_float_complex * _k); +void bessel_azpkf(unsigned int _n, + liquid_float_complex * _z, + liquid_float_complex * _p, + liquid_float_complex * _k); + +// compute frequency pre-warping factor +float iirdes_freqprewarp(liquid_iirdes_bandtype _btype, + float _fc, + float _f0); + +// convert analog z/p/k form to discrete z/p/k form (bilinear z-transform) +// _za : analog zeros [length: _nza] +// _nza : number of analog zeros +// _pa : analog poles [length: _npa] +// _npa : number of analog poles +// _m : frequency pre-warping factor +// _zd : output digital zeros [length: _npa] +// _pd : output digital poles [length: _npa] +// _kd : output digital gain (should actually be real-valued) +void bilinear_zpkf(liquid_float_complex * _za, + unsigned int _nza, + liquid_float_complex * _pa, + unsigned int _npa, + liquid_float_complex _ka, + float _m, + liquid_float_complex * _zd, + liquid_float_complex * _pd, + liquid_float_complex * _kd); + +// digital z/p/k low-pass to high-pass +// _zd : digital zeros (low-pass prototype), [length: _n] +// _pd : digital poles (low-pass prototype), [length: _n] +// _n : low-pass filter order +// _zdt : output digital zeros transformed [length: _n] +// _pdt : output digital poles transformed [length: _n] +void iirdes_dzpk_lp2hp(liquid_float_complex * _zd, + liquid_float_complex * _pd, + unsigned int _n, + liquid_float_complex * _zdt, + liquid_float_complex * _pdt); + +// digital z/p/k low-pass to band-pass +// _zd : digital zeros (low-pass prototype), [length: _n] +// _pd : digital poles (low-pass prototype), [length: _n] +// _n : low-pass filter order +// _f0 : center frequency +// _zdt : output digital zeros transformed [length: 2*_n] +// _pdt : output digital poles transformed [length: 2*_n] +void iirdes_dzpk_lp2bp(liquid_float_complex * _zd, + liquid_float_complex * _pd, + unsigned int _n, + float _f0, + liquid_float_complex * _zdt, + liquid_float_complex * _pdt); + +// convert discrete z/p/k form to transfer function +// _zd : digital zeros [length: _n] +// _pd : digital poles [length: _n] +// _n : filter order +// _kd : digital gain +// _b : output numerator [length: _n+1] +// _a : output denominator [length: _n+1] +void iirdes_dzpk2tff(liquid_float_complex * _zd, + liquid_float_complex * _pd, + unsigned int _n, + liquid_float_complex _kd, + float * _b, + float * _a); + +// convert discrete z/p/k form to second-order sections +// _zd : digital zeros [length: _n] +// _pd : digital poles [length: _n] +// _n : filter order +// _kd 
: digital gain +// _B : output numerator [size: 3 x L+r] +// _A : output denominator [size: 3 x L+r] +// where r = _n%2, L = (_n-r)/2 +void iirdes_dzpk2sosf(liquid_float_complex * _zd, + liquid_float_complex * _pd, + unsigned int _n, + liquid_float_complex _kd, + float * _B, + float * _A); + +// additional IIR filter design templates + +// design 2nd-order IIR filter (active lag) +// 1 + t2 * s +// F(s) = ------------ +// 1 + t1 * s +// +// _w : filter bandwidth +// _zeta : damping factor (1/sqrt(2) suggested) +// _K : loop gain (1000 suggested) +// _b : output feed-forward coefficients [size: 3 x 1] +// _a : output feed-back coefficients [size: 3 x 1] +void iirdes_pll_active_lag(float _w, + float _zeta, + float _K, + float * _b, + float * _a); + +// design 2nd-order IIR filter (active PI) +// 1 + t2 * s +// F(s) = ------------ +// t1 * s +// +// _w : filter bandwidth +// _zeta : damping factor (1/sqrt(2) suggested) +// _K : loop gain (1000 suggested) +// _b : output feed-forward coefficients [size: 3 x 1] +// _a : output feed-back coefficients [size: 3 x 1] +void iirdes_pll_active_PI(float _w, + float _zeta, + float _K, + float * _b, + float * _a); + +// checks stability of iir filter +// _b : feed-forward coefficients [size: _n x 1] +// _a : feed-back coefficients [size: _n x 1] +// _n : number of coefficients +int iirdes_isstable(float * _b, + float * _a, + unsigned int _n); + +// +// linear prediction +// + +// compute the linear prediction coefficients for an input signal _x +// _x : input signal [size: _n x 1] +// _n : input signal length +// _p : prediction filter order +// _a : prediction filter [size: _p+1 x 1] +// _e : prediction error variance [size: _p+1 x 1] +void liquid_lpc(float * _x, + unsigned int _n, + unsigned int _p, + float * _a, + float * _g); + +// solve the Yule-Walker equations using Levinson-Durbin recursion +// for _symmetric_ autocorrelation +// _r : autocorrelation array [size: _p+1 x 1] +// _p : filter order +// _a : output coefficients [size: _p+1 x 1] +// _e : error variance [size: _p+1 x 1] +// +// NOTES: +// By definition _a[0] = 1.0 +void liquid_levinson(float * _r, + unsigned int _p, + float * _a, + float * _e); + +// +// auto-correlator (delay cross-correlation) +// + +#define LIQUID_AUTOCORR_MANGLE_CCCF(name) LIQUID_CONCAT(autocorr_cccf,name) +#define LIQUID_AUTOCORR_MANGLE_RRRF(name) LIQUID_CONCAT(autocorr_rrrf,name) + +// Macro: +// AUTOCORR : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_AUTOCORR_DEFINE_API(AUTOCORR,TO,TC,TI) \ + \ +/* Computes auto-correlation with a fixed lag on input signals */ \ +typedef struct AUTOCORR(_s) * AUTOCORR(); \ + \ +/* Create auto-correlator object with a particular window length and */ \ +/* delay */ \ +/* _window_size : size of the correlator window */ \ +/* _delay : correlator delay [samples] */ \ +AUTOCORR() AUTOCORR(_create)(unsigned int _window_size, \ + unsigned int _delay); \ + \ +/* Destroy auto-correlator object, freeing internal memory */ \ +void AUTOCORR(_destroy)(AUTOCORR() _q); \ + \ +/* Reset auto-correlator object's internals */ \ +void AUTOCORR(_reset)(AUTOCORR() _q); \ + \ +/* Print auto-correlator parameters to stdout */ \ +void AUTOCORR(_print)(AUTOCORR() _q); \ + \ +/* Push sample into auto-correlator object */ \ +/* _q : auto-correlator object */ \ +/* _x : single input sample */ \ +void AUTOCORR(_push)(AUTOCORR() _q, \ + TI _x); \ + \ +/* Write block of samples to auto-correlator object */ \ +/* _q : auto-correlation 
object */ \ +/* _x : input array [size: _n x 1] */ \ +/* _n : number of input samples */ \ +void AUTOCORR(_write)(AUTOCORR() _q, \ + TI * _x, \ + unsigned int _n); \ + \ +/* Compute single auto-correlation output */ \ +/* _q : auto-correlator object */ \ +/* _rxx : auto-correlated output */ \ +void AUTOCORR(_execute)(AUTOCORR() _q, \ + TO * _rxx); \ + \ +/* Compute auto-correlation on block of samples; the input and output */ \ +/* arrays may have the same pointer */ \ +/* _q : auto-correlation object */ \ +/* _x : input array [size: _n x 1] */ \ +/* _n : number of input, output samples */ \ +/* _rxx : input array [size: _n x 1] */ \ +void AUTOCORR(_execute_block)(AUTOCORR() _q, \ + TI * _x, \ + unsigned int _n, \ + TO * _rxx); \ + \ +/* return sum of squares of buffered samples */ \ +float AUTOCORR(_get_energy)(AUTOCORR() _q); \ + +LIQUID_AUTOCORR_DEFINE_API(LIQUID_AUTOCORR_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + +LIQUID_AUTOCORR_DEFINE_API(LIQUID_AUTOCORR_MANGLE_RRRF, + float, + float, + float) + + +// +// Finite impulse response filter +// + +#define LIQUID_FIRFILT_MANGLE_RRRF(name) LIQUID_CONCAT(firfilt_rrrf,name) +#define LIQUID_FIRFILT_MANGLE_CRCF(name) LIQUID_CONCAT(firfilt_crcf,name) +#define LIQUID_FIRFILT_MANGLE_CCCF(name) LIQUID_CONCAT(firfilt_cccf,name) + +// Macro: +// FIRFILT : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_FIRFILT_DEFINE_API(FIRFILT,TO,TC,TI) \ + \ +/* Finite impulse response (FIR) filter */ \ +typedef struct FIRFILT(_s) * FIRFILT(); \ + \ +/* Create a finite impulse response filter (firfilt) object by directly */ \ +/* specifying the filter coefficients in an array */ \ +/* _h : filter coefficients [size: _n x 1] */ \ +/* _n : number of filter coefficients, _n > 0 */ \ +FIRFILT() FIRFILT(_create)(TC * _h, \ + unsigned int _n); \ + \ +/* Create object using Kaiser-Bessel windowed sinc method */ \ +/* _n : filter length, _n > 0 */ \ +/* _fc : filter normalized cut-off frequency, 0 < _fc < 0.5 */ \ +/* _As : filter stop-band attenuation [dB], _As > 0 */ \ +/* _mu : fractional sample offset, -0.5 < _mu < 0.5 */ \ +FIRFILT() FIRFILT(_create_kaiser)(unsigned int _n, \ + float _fc, \ + float _As, \ + float _mu); \ + \ +/* Create object from square-root Nyquist prototype. */ \ +/* The filter length will be \(2 k m + 1 \) samples long with a delay */ \ +/* of \( k m + 1 \) samples. */ \ +/* _type : filter type (e.g. 
LIQUID_FIRFILT_RRC) */ \ +/* _k : nominal samples per symbol, _k > 1 */ \ +/* _m : filter delay [symbols], _m > 0 */ \ +/* _beta : rolloff factor, 0 < beta <= 1 */ \ +/* _mu : fractional sample offset [samples], -0.5 < _mu < 0.5 */ \ +FIRFILT() FIRFILT(_create_rnyquist)(int _type, \ + unsigned int _k, \ + unsigned int _m, \ + float _beta, \ + float _mu); \ + \ +/* Create object from Parks-McClellan algorithm prototype */ \ +/* _h_len : filter length, _h_len > 0 */ \ +/* _fc : cutoff frequency, 0 < _fc < 0.5 */ \ +/* _As : stop-band attenuation [dB], _As > 0 */ \ +FIRFILT() FIRFILT(_create_firdespm)(unsigned int _h_len, \ + float _fc, \ + float _As); \ + \ +/* Create rectangular filter prototype; that is */ \ +/* \( \vec{h} = \{ 1, 1, 1, \ldots 1 \} \) */ \ +/* _n : length of filter [samples], 0 < _n <= 1024 */ \ +FIRFILT() FIRFILT(_create_rect)(unsigned int _n); \ + \ +/* Create DC blocking filter from prototype */ \ +/* _m : prototype filter semi-length such that filter length is 2*m+1 */ \ +/* _As : prototype filter stop-band attenuation [dB], _As > 0 */ \ +FIRFILT() FIRFILT(_create_dc_blocker)(unsigned int _m, \ + float _As); \ + \ +/* Create notch filter from prototype */ \ +/* _m : prototype filter semi-length such that filter length is 2*m+1 */ \ +/* _As : prototype filter stop-band attenuation [dB], _As > 0 */ \ +/* _f0 : center frequency for notch, _fc in [-0.5, 0.5] */ \ +FIRFILT() FIRFILT(_create_notch)(unsigned int _m, \ + float _As, \ + float _f0); \ + \ +/* Re-create filter object of potentially a different length with */ \ +/* different coefficients. If the length of the filter does not change, */ \ +/* not memory reallocation is invoked. */ \ +/* _q : original filter object */ \ +/* _h : pointer to filter coefficients, [size: _n x 1] */ \ +/* _n : filter length, _n > 0 */ \ +FIRFILT() FIRFILT(_recreate)(FIRFILT() _q, \ + TC * _h, \ + unsigned int _n); \ + \ +/* Destroy filter object and free all internal memory */ \ +void FIRFILT(_destroy)(FIRFILT() _q); \ + \ +/* Reset filter object's internal buffer */ \ +void FIRFILT(_reset)(FIRFILT() _q); \ + \ +/* Print filter object information to stdout */ \ +void FIRFILT(_print)(FIRFILT() _q); \ + \ +/* Set output scaling for filter */ \ +/* _q : filter object */ \ +/* _scale : scaling factor to apply to each output sample */ \ +void FIRFILT(_set_scale)(FIRFILT() _q, \ + TC _scale); \ + \ +/* Get output scaling for filter */ \ +/* _q : filter object */ \ +/* _scale : scaling factor applied to each output sample */ \ +void FIRFILT(_get_scale)(FIRFILT() _q, \ + TC * _scale); \ + \ +/* Push sample into filter object's internal buffer */ \ +/* _q : filter object */ \ +/* _x : single input sample */ \ +void FIRFILT(_push)(FIRFILT() _q, \ + TI _x); \ + \ +/* Write block of samples into filter object's internal buffer */ \ +/* _q : filter object */ \ +/* _x : buffer of input samples, [size: _n x 1] */ \ +/* _n : number of input samples */ \ +void FIRFILT(_write)(FIRFILT() _q, \ + TI * _x, \ + unsigned int _n); \ + \ +/* Execute vector dot product on the filter's internal buffer and */ \ +/* coefficients */ \ +/* _q : filter object */ \ +/* _y : pointer to single output sample */ \ +void FIRFILT(_execute)(FIRFILT() _q, \ + TO * _y); \ + \ +/* Execute the filter on a block of input samples; in-place operation */ \ +/* is permitted (_x and _y may point to the same place in memory) */ \ +/* _q : filter object */ \ +/* _x : pointer to input array, [size: _n x 1] */ \ +/* _n : number of input, output samples */ \ +/* _y : pointer to output array, 
[size: _n x 1] */ \ +void FIRFILT(_execute_block)(FIRFILT() _q, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + \ +/* Get length of filter object (number of internal coefficients) */ \ +unsigned int FIRFILT(_get_length)(FIRFILT() _q); \ + \ +/* Compute complex frequency response of filter object */ \ +/* _q : filter object */ \ +/* _fc : normalized frequency for evaluation */ \ +/* _H : pointer to output complex frequency response */ \ +void FIRFILT(_freqresponse)(FIRFILT() _q, \ + float _fc, \ + liquid_float_complex * _H); \ + \ +/* Compute and return group delay of filter object */ \ +/* _q : filter object */ \ +/* _fc : frequency to evaluate */ \ +float FIRFILT(_groupdelay)(FIRFILT() _q, \ + float _fc); \ + +LIQUID_FIRFILT_DEFINE_API(LIQUID_FIRFILT_MANGLE_RRRF, + float, + float, + float) + +LIQUID_FIRFILT_DEFINE_API(LIQUID_FIRFILT_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_FIRFILT_DEFINE_API(LIQUID_FIRFILT_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + +// +// FIR Hilbert transform +// 2:1 real-to-complex decimator +// 1:2 complex-to-real interpolator +// + +#define LIQUID_FIRHILB_MANGLE_FLOAT(name) LIQUID_CONCAT(firhilbf, name) +//#define LIQUID_FIRHILB_MANGLE_DOUBLE(name) LIQUID_CONCAT(firhilb, name) + +// NOTES: +// Although firhilb is a placeholder for both decimation and +// interpolation, separate objects should be used for each task. +#define LIQUID_FIRHILB_DEFINE_API(FIRHILB,T,TC) \ + \ +/* Finite impulse response (FIR) Hilbert transform */ \ +typedef struct FIRHILB(_s) * FIRHILB(); \ + \ +/* Create a firhilb object with a particular filter semi-length and */ \ +/* desired stop-band attenuation. */ \ +/* Internally the object designs a half-band filter based on applying */ \ +/* a Kaiser-Bessel window to a sinc function to guarantee zeros at all */ \ +/* off-center odd indexed samples. */ \ +/* _m : filter semi-length, delay is \( 2 m + 1 \) */ \ +/* _As : filter stop-band attenuation [dB] */ \ +FIRHILB() FIRHILB(_create)(unsigned int _m, \ + float _As); \ + \ +/* Destroy finite impulse response Hilbert transform, freeing all */ \ +/* internally-allocted memory and objects. 
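// Minimal usage sketch for the firfilt API declared above (crcf variant): design a
// Kaiser-windowed low-pass filter, push one sample and compute one output. The wrapper
// function name, include path and all numeric parameters are illustrative assumptions.
#include <complex.h>
#include <liquid/liquid.h>   // adjust the include path if the header is vendored locally

static void firfilt_usage_sketch(void)
{
    // 21-tap low-pass: cut-off 0.20 (normalized), 60 dB stop-band, no fractional offset
    firfilt_crcf q = firfilt_crcf_create_kaiser(21, 0.20f, 60.0f, 0.0f);
    float complex x = 1.0f;          // single input sample
    float complex y;                 // single output sample
    firfilt_crcf_push(q, x);         // push sample into the internal buffer
    firfilt_crcf_execute(q, &y);     // dot product of buffer and coefficients
    firfilt_crcf_destroy(q);
}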
*/ \ +void FIRHILB(_destroy)(FIRHILB() _q); \ + \ +/* Print firhilb object internals to stdout */ \ +void FIRHILB(_print)(FIRHILB() _q); \ + \ +/* Reset firhilb object internal state */ \ +void FIRHILB(_reset)(FIRHILB() _q); \ + \ +/* Execute Hilbert transform (real to complex) */ \ +/* _q : Hilbert transform object */ \ +/* _x : real-valued input sample */ \ +/* _y : complex-valued output sample */ \ +void FIRHILB(_r2c_execute)(FIRHILB() _q, \ + T _x, \ + TC * _y); \ + \ +/* Execute Hilbert transform (complex to real) */ \ +/* _q : Hilbert transform object */ \ +/* _x : complex-valued input sample */ \ +/* _y0 : real-valued output sample, lower side-band retained */ \ +/* _y1 : real-valued output sample, upper side-band retained */ \ +void FIRHILB(_c2r_execute)(FIRHILB() _q, \ + TC _x, \ + T * _y0, \ + T * _y1); \ + \ +/* Execute Hilbert transform decimator (real to complex) */ \ +/* _q : Hilbert transform object */ \ +/* _x : real-valued input array, [size: 2 x 1] */ \ +/* _y : complex-valued output sample */ \ +void FIRHILB(_decim_execute)(FIRHILB() _q, \ + T * _x, \ + TC * _y); \ + \ +/* Execute Hilbert transform decimator (real to complex) on a block of */ \ +/* samples */ \ +/* _q : Hilbert transform object */ \ +/* _x : real-valued input array, [size: 2*_n x 1] */ \ +/* _n : number of output samples */ \ +/* _y : complex-valued output array, [size: _n x 1] */ \ +void FIRHILB(_decim_execute_block)(FIRHILB() _q, \ + T * _x, \ + unsigned int _n, \ + TC * _y); \ + \ +/* Execute Hilbert transform interpolator (real to complex) */ \ +/* _q : Hilbert transform object */ \ +/* _x : complex-valued input sample */ \ +/* _y : real-valued output array, [size: 2 x 1] */ \ +void FIRHILB(_interp_execute)(FIRHILB() _q, \ + TC _x, \ + T * _y); \ + \ +/* Execute Hilbert transform interpolator (complex to real) on a block */ \ +/* of samples */ \ +/* _q : Hilbert transform object */ \ +/* _x : complex-valued input array, [size: _n x 1] */ \ +/* _n : number of *input* samples */ \ +/* _y : real-valued output array, [size: 2*_n x 1] */ \ +void FIRHILB(_interp_execute_block)(FIRHILB() _q, \ + TC * _x, \ + unsigned int _n, \ + T * _y); \ + +LIQUID_FIRHILB_DEFINE_API(LIQUID_FIRHILB_MANGLE_FLOAT, float, liquid_float_complex) +//LIQUID_FIRHILB_DEFINE_API(LIQUID_FIRHILB_MANGLE_DOUBLE, double, liquid_double_complex) + + +// +// Infinite impulse response (IIR) Hilbert transform +// 2:1 real-to-complex decimator +// 1:2 complex-to-real interpolator +// + +#define LIQUID_IIRHILB_MANGLE_FLOAT(name) LIQUID_CONCAT(iirhilbf, name) +//#define LIQUID_IIRHILB_MANGLE_DOUBLE(name) LIQUID_CONCAT(iirhilb, name) + +// NOTES: +// Although iirhilb is a placeholder for both decimation and +// interpolation, separate objects should be used for each task. +#define LIQUID_IIRHILB_DEFINE_API(IIRHILB,T,TC) \ + \ +/* Infinite impulse response (IIR) Hilbert transform */ \ +typedef struct IIRHILB(_s) * IIRHILB(); \ + \ +/* Create a iirhilb object with a particular filter type, order, and */ \ +/* desired pass- and stop-band attenuation. */ \ +/* _ftype : filter type (e.g. LIQUID_IIRDES_BUTTER) */ \ +/* _n : filter order, _n > 0 */ \ +/* _Ap : pass-band ripple [dB], _Ap > 0 */ \ +/* _As : stop-band ripple [dB], _Ap > 0 */ \ +IIRHILB() IIRHILB(_create)(liquid_iirdes_filtertype _ftype, \ + unsigned int _n, \ + float _Ap, \ + float _As); \ + \ +/* Create a default iirhilb object with a particular filter order. 
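// Minimal usage sketch for the firhilbf object declared above: 2:1 real-to-complex
// decimation of a pair of real samples. Semi-length, attenuation and the wrapper
// function name are illustrative assumptions.
#include <complex.h>
#include <liquid/liquid.h>

static void firhilbf_usage_sketch(void)
{
    firhilbf q = firhilbf_create(5, 60.0f);  // semi-length 5, 60 dB stop-band
    float x[2] = { 0.707f, -0.707f };        // two consecutive real input samples
    float complex y;                         // one complex output sample
    firhilbf_decim_execute(q, x, &y);        // real-to-complex 2:1 decimation
    firhilbf_destroy(q);
}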
*/ \ +/* _n : filter order, _n > 0 */ \ +IIRHILB() IIRHILB(_create_default)(unsigned int _n); \ + \ +/* Destroy finite impulse response Hilbert transform, freeing all */ \ +/* internally-allocted memory and objects. */ \ +void IIRHILB(_destroy)(IIRHILB() _q); \ + \ +/* Print iirhilb object internals to stdout */ \ +void IIRHILB(_print)(IIRHILB() _q); \ + \ +/* Reset iirhilb object internal state */ \ +void IIRHILB(_reset)(IIRHILB() _q); \ + \ +/* Execute Hilbert transform (real to complex) */ \ +/* _q : Hilbert transform object */ \ +/* _x : real-valued input sample */ \ +/* _y : complex-valued output sample */ \ +void IIRHILB(_r2c_execute)(IIRHILB() _q, \ + T _x, \ + TC * _y); \ + \ +/* Execute Hilbert transform (complex to real) */ \ +/* _q : Hilbert transform object */ \ +/* _x : complex-valued input sample */ \ +/* _y : real-valued output sample */ \ +void IIRHILB(_c2r_execute)(IIRHILB() _q, \ + TC _x, \ + T * _y); \ + \ +/* Execute Hilbert transform decimator (real to complex) */ \ +/* _q : Hilbert transform object */ \ +/* _x : real-valued input array, [size: 2 x 1] */ \ +/* _y : complex-valued output sample */ \ +void IIRHILB(_decim_execute)(IIRHILB() _q, \ + T * _x, \ + TC * _y); \ + \ +/* Execute Hilbert transform decimator (real to complex) on a block of */ \ +/* samples */ \ +/* _q : Hilbert transform object */ \ +/* _x : real-valued input array, [size: 2*_n x 1] */ \ +/* _n : number of output samples */ \ +/* _y : complex-valued output array, [size: _n x 1] */ \ +void IIRHILB(_decim_execute_block)(IIRHILB() _q, \ + T * _x, \ + unsigned int _n, \ + TC * _y); \ + \ +/* Execute Hilbert transform interpolator (real to complex) */ \ +/* _q : Hilbert transform object */ \ +/* _x : complex-valued input sample */ \ +/* _y : real-valued output array, [size: 2 x 1] */ \ +void IIRHILB(_interp_execute)(IIRHILB() _q, \ + TC _x, \ + T * _y); \ + \ +/* Execute Hilbert transform interpolator (complex to real) on a block */ \ +/* of samples */ \ +/* _q : Hilbert transform object */ \ +/* _x : complex-valued input array, [size: _n x 1] */ \ +/* _n : number of *input* samples */ \ +/* _y : real-valued output array, [size: 2*_n x 1] */ \ +void IIRHILB(_interp_execute_block)(IIRHILB() _q, \ + TC * _x, \ + unsigned int _n, \ + T * _y); \ + +LIQUID_IIRHILB_DEFINE_API(LIQUID_IIRHILB_MANGLE_FLOAT, float, liquid_float_complex) +//LIQUID_IIRHILB_DEFINE_API(LIQUID_IIRHILB_MANGLE_DOUBLE, double, liquid_double_complex) + + +// +// FFT-based finite impulse response filter +// + +#define LIQUID_FFTFILT_MANGLE_RRRF(name) LIQUID_CONCAT(fftfilt_rrrf,name) +#define LIQUID_FFTFILT_MANGLE_CRCF(name) LIQUID_CONCAT(fftfilt_crcf,name) +#define LIQUID_FFTFILT_MANGLE_CCCF(name) LIQUID_CONCAT(fftfilt_cccf,name) + +// Macro: +// FFTFILT : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_FFTFILT_DEFINE_API(FFTFILT,TO,TC,TI) \ + \ +/* Fast Fourier transform (FFT) finite impulse response filter */ \ +typedef struct FFTFILT(_s) * FFTFILT(); \ + \ +/* Create FFT-based FIR filter using external coefficients */ \ +/* _h : filter coefficients, [size: _h_len x 1] */ \ +/* _h_len : filter length, _h_len > 0 */ \ +/* _n : block size = nfft/2, _n >= _h_len-1 */ \ +FFTFILT() FFTFILT(_create)(TC * _h, \ + unsigned int _h_len, \ + unsigned int _n); \ + \ +/* Destroy filter object and free all internal memory */ \ +void FFTFILT(_destroy)(FFTFILT() _q); \ + \ +/* Reset filter object's internal buffer */ \ +void FFTFILT(_reset)(FFTFILT() _q); \ + \ +/* Print filter 
object information to stdout */ \ +void FFTFILT(_print)(FFTFILT() _q); \ + \ +/* Set output scaling for filter */ \ +void FFTFILT(_set_scale)(FFTFILT() _q, \ + TC _scale); \ + \ +/* Get output scaling for filter */ \ +void FFTFILT(_get_scale)(FFTFILT() _q, \ + TC * _scale); \ + \ +/* Execute the filter on internal buffer and coefficients given a block */ \ +/* of input samples; in-place operation is permitted (_x and _y may */ \ +/* point to the same place in memory) */ \ +/* _q : filter object */ \ +/* _x : pointer to input data array, [size: _n x 1] */ \ +/* _y : pointer to output data array, [size: _n x 1] */ \ +void FFTFILT(_execute)(FFTFILT() _q, \ + TI * _x, \ + TO * _y); \ + \ +/* Get length of filter object's internal coefficients */ \ +unsigned int FFTFILT(_get_length)(FFTFILT() _q); \ + +LIQUID_FFTFILT_DEFINE_API(LIQUID_FFTFILT_MANGLE_RRRF, + float, + float, + float) + +LIQUID_FFTFILT_DEFINE_API(LIQUID_FFTFILT_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_FFTFILT_DEFINE_API(LIQUID_FFTFILT_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// +// Infinite impulse response filter +// + +#define LIQUID_IIRFILT_MANGLE_RRRF(name) LIQUID_CONCAT(iirfilt_rrrf,name) +#define LIQUID_IIRFILT_MANGLE_CRCF(name) LIQUID_CONCAT(iirfilt_crcf,name) +#define LIQUID_IIRFILT_MANGLE_CCCF(name) LIQUID_CONCAT(iirfilt_cccf,name) + +// Macro: +// IIRFILT : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_IIRFILT_DEFINE_API(IIRFILT,TO,TC,TI) \ + \ +/* Infinite impulse response (IIR) filter */ \ +typedef struct IIRFILT(_s) * IIRFILT(); \ + \ +/* Create infinite impulse response filter from external coefficients. */ \ +/* Note that the number of feed-forward and feed-back coefficients do */ \ +/* not need to be equal, but they do need to be non-zero. */ \ +/* Furthermore, the first feed-back coefficient \(a_0\) cannot be */ \ +/* equal to zero, otherwise the filter will be invalid as this value is */ \ +/* factored out from all coefficients. */ \ +/* For stability reasons the number of coefficients should reasonably */ \ +/* not exceed about 8 for single-precision floating-point. */ \ +/* _b : feed-forward coefficients (numerator), [size: _nb x 1] */ \ +/* _nb : number of feed-forward coefficients, _nb > 0 */ \ +/* _a : feed-back coefficients (denominator), [size: _na x 1] */ \ +/* _na : number of feed-back coefficients, _na > 0 */ \ +IIRFILT() IIRFILT(_create)(TC * _b, \ + unsigned int _nb, \ + TC * _a, \ + unsigned int _na); \ + \ +/* Create IIR filter using 2nd-order secitons from external */ \ +/* coefficients. */ \ +/* _B : feed-forward coefficients [size: _nsos x 3] */ \ +/* _A : feed-back coefficients [size: _nsos x 3] */ \ +/* _nsos : number of second-order sections (sos), _nsos > 0 */ \ +IIRFILT() IIRFILT(_create_sos)(TC * _B, \ + TC * _A, \ + unsigned int _nsos); \ + \ +/* Create IIR filter from design template */ \ +/* _ftype : filter type (e.g. LIQUID_IIRDES_BUTTER) */ \ +/* _btype : band type (e.g. LIQUID_IIRDES_BANDPASS) */ \ +/* _format : coefficients format (e.g. 
LIQUID_IIRDES_SOS) */ \ +/* _order : filter order, _order > 0 */ \ +/* _fc : low-pass prototype cut-off frequency, 0 <= _fc <= 0.5 */ \ +/* _f0 : center frequency (band-pass, band-stop), 0 <= _f0 <= 0.5 */ \ +/* _Ap : pass-band ripple in dB, _Ap > 0 */ \ +/* _As : stop-band ripple in dB, _As > 0 */ \ +IIRFILT() IIRFILT(_create_prototype)( \ + liquid_iirdes_filtertype _ftype, \ + liquid_iirdes_bandtype _btype, \ + liquid_iirdes_format _format, \ + unsigned int _order, \ + float _fc, \ + float _f0, \ + float _Ap, \ + float _As); \ + \ +/* Create simplified low-pass Butterworth IIR filter */ \ +/* _order : filter order, _order > 0 */ \ +/* _fc : low-pass prototype cut-off frequency */ \ +IIRFILT() IIRFILT(_create_lowpass)(unsigned int _order, \ + float _fc); \ + \ +/* Create 8th-order integrator filter */ \ +IIRFILT() IIRFILT(_create_integrator)(void); \ + \ +/* Create 8th-order differentiator filter */ \ +IIRFILT() IIRFILT(_create_differentiator)(void); \ + \ +/* Create simple first-order DC-blocking filter with transfer function */ \ +/* \( H(z) = \frac{1 - z^{-1}}{1 - (1-\alpha)z^{-1}} \) */ \ +/* _alpha : normalized filter bandwidth, _alpha > 0 */ \ +IIRFILT() IIRFILT(_create_dc_blocker)(float _alpha); \ + \ +/* Create filter to operate as second-order integrating phase-locked */ \ +/* loop (active lag design) */ \ +/* _w : filter bandwidth, 0 < _w < 1 */ \ +/* _zeta : damping factor, \( 1/\sqrt{2} \) suggested, 0 < _zeta < 1 */ \ +/* _K : loop gain, 1000 suggested, _K > 0 */ \ +IIRFILT() IIRFILT(_create_pll)(float _w, \ + float _zeta, \ + float _K); \ + \ +/* Destroy iirfilt object, freeing all internal memory */ \ +void IIRFILT(_destroy)(IIRFILT() _q); \ + \ +/* Print iirfilt object properties to stdout */ \ +void IIRFILT(_print)(IIRFILT() _q); \ + \ +/* Reset iirfilt object internals */ \ +void IIRFILT(_reset)(IIRFILT() _q); \ + \ +/* Compute filter output given a signle input sample */ \ +/* _q : iirfilt object */ \ +/* _x : input sample */ \ +/* _y : output sample pointer */ \ +void IIRFILT(_execute)(IIRFILT() _q, \ + TI _x, \ + TO * _y); \ + \ +/* Execute the filter on a block of input samples; */ \ +/* in-place operation is permitted (the input and output buffers may be */ \ +/* the same) */ \ +/* _q : filter object */ \ +/* _x : pointer to input array, [size: _n x 1] */ \ +/* _n : number of input, output samples, _n > 0 */ \ +/* _y : pointer to output array, [size: _n x 1] */ \ +void IIRFILT(_execute_block)(IIRFILT() _q, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + \ +/* Return number of coefficients for iirfilt object (maximum between */ \ +/* the feed-forward and feed-back coefficients). 
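// Minimal usage sketch for the iirfilt API above (crcf variant), using the simplified
// Butterworth low-pass constructor. Order, cut-off and the wrapper function name are
// illustrative assumptions.
#include <complex.h>
#include <liquid/liquid.h>

static void iirfilt_usage_sketch(void)
{
    iirfilt_crcf q = iirfilt_crcf_create_lowpass(5, 0.10f); // 5th-order, fc = 0.10
    float complex x = 0.3f - 0.1f*_Complex_I;               // single input sample
    float complex y;
    iirfilt_crcf_execute(q, x, &y);                         // one sample in, one out
    iirfilt_crcf_destroy(q);
}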
Note that the filter */ \ +/* length = filter order + 1 */ \ +unsigned int IIRFILT(_get_length)(IIRFILT() _q); \ + \ +/* Compute complex frequency response of filter object */ \ +/* _q : filter object */ \ +/* _fc : normalized frequency for evaluation */ \ +/* _H : pointer to output complex frequency response */ \ +void IIRFILT(_freqresponse)(IIRFILT() _q, \ + float _fc, \ + liquid_float_complex * _H); \ + \ +/* Compute and return group delay of filter object */ \ +/* _q : filter object */ \ +/* _fc : frequency to evaluate */ \ +float IIRFILT(_groupdelay)(IIRFILT() _q, float _fc); \ + +LIQUID_IIRFILT_DEFINE_API(LIQUID_IIRFILT_MANGLE_RRRF, + float, + float, + float) + +LIQUID_IIRFILT_DEFINE_API(LIQUID_IIRFILT_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_IIRFILT_DEFINE_API(LIQUID_IIRFILT_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// +// FIR Polyphase filter bank +// +#define LIQUID_FIRPFB_MANGLE_RRRF(name) LIQUID_CONCAT(firpfb_rrrf,name) +#define LIQUID_FIRPFB_MANGLE_CRCF(name) LIQUID_CONCAT(firpfb_crcf,name) +#define LIQUID_FIRPFB_MANGLE_CCCF(name) LIQUID_CONCAT(firpfb_cccf,name) + +// Macro: +// FIRPFB : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_FIRPFB_DEFINE_API(FIRPFB,TO,TC,TI) \ + \ +/* Finite impulse response (FIR) polyphase filter bank (PFB) */ \ +typedef struct FIRPFB(_s) * FIRPFB(); \ + \ +/* Create firpfb object with _M sub-filter each of length _h_len/_M */ \ +/* from an external array of coefficients */ \ +/* _M : number of filters in the bank, _M > 1 */ \ +/* _h : coefficients, [size: _h_len x 1] */ \ +/* _h_len : filter length (multiple of _M), _h_len >= _M */ \ +FIRPFB() FIRPFB(_create)(unsigned int _M, \ + TC * _h, \ + unsigned int _h_len); \ + \ +/* Create firpfb object using Kaiser-Bessel windowed sinc filter design */ \ +/* method, using default values for cut-off frequency and stop-band */ \ +/* attenuation. This is equivalent to: */ \ +/* FIRPFB(_create_kaiser)(_M, _m, 0.5, 60.0) */ \ +/* which creates a Nyquist filter at the appropriate cut-off frequency. */ \ +/* _M : number of filters in the bank, _M > 0 */ \ +/* _m : filter semi-length [samples], _m > 0 */ \ +FIRPFB() FIRPFB(_create_default)(unsigned int _M, \ + unsigned int _m); \ + \ +/* Create firpfb object using Kaiser-Bessel windowed sinc filter design */ \ +/* method */ \ +/* _M : number of filters in the bank, _M > 0 */ \ +/* _m : filter semi-length [samples], _m > 0 */ \ +/* _fc : filter normalized cut-off frequency, 0 < _fc < 0.5 */ \ +/* _As : filter stop-band suppression [dB], _As > 0 */ \ +FIRPFB() FIRPFB(_create_kaiser)(unsigned int _M, \ + unsigned int _m, \ + float _fc, \ + float _As); \ + \ +/* Create firpfb from square-root Nyquist prototype */ \ +/* _type : filter type (e.g. LIQUID_FIRFILT_RRC) */ \ +/* _M : number of filters in the bank, _M > 0 */ \ +/* _k : nominal samples/symbol, _k > 1 */ \ +/* _m : filter delay [symbols], _m > 0 */ \ +/* _beta : rolloff factor, 0 < _beta <= 1 */ \ +FIRPFB() FIRPFB(_create_rnyquist)(int _type, \ + unsigned int _M, \ + unsigned int _k, \ + unsigned int _m, \ + float _beta); \ + \ +/* Create from square-root derivative Nyquist prototype */ \ +/* _type : filter type (e.g. 
LIQUID_FIRFILT_RRC) */ \ +/* _M : number of filters in the bank, _M > 0 */ \ +/* _k : nominal samples/symbol, _k > 1 */ \ +/* _m : filter delay [symbols], _m > 0 */ \ +/* _beta : rolloff factor, 0 < _beta <= 1 */ \ +FIRPFB() FIRPFB(_create_drnyquist)(int _type, \ + unsigned int _M, \ + unsigned int _k, \ + unsigned int _m, \ + float _beta); \ + \ +/* Re-create firpfb object of potentially a different length with */ \ +/* different coefficients. If the length of the filter does not change, */ \ +/* not memory reallocation is invoked. */ \ +/* _q : original firpfb object */ \ +/* _M : number of filters in the bank, _M > 1 */ \ +/* _h : coefficients, [size: _h_len x 1] */ \ +/* _h_len : filter length (multiple of _M), _h_len >= _M */ \ +FIRPFB() FIRPFB(_recreate)(FIRPFB() _q, \ + unsigned int _M, \ + TC * _h, \ + unsigned int _h_len); \ + \ +/* Destroy firpfb object, freeing all internal memory and destroying */ \ +/* all internal objects */ \ +void FIRPFB(_destroy)(FIRPFB() _q); \ + \ +/* Print firpfb object's parameters to stdout */ \ +void FIRPFB(_print)(FIRPFB() _q); \ + \ +/* Set output scaling for filter */ \ +/* _q : filter object */ \ +/* _scale : scaling factor to apply to each output sample */ \ +void FIRPFB(_set_scale)(FIRPFB() _q, \ + TC _scale); \ + \ +/* Get output scaling for filter */ \ +/* _q : filter object */ \ +/* _scale : scaling factor applied to each output sample */ \ +void FIRPFB(_get_scale)(FIRPFB() _q, \ + TC * _scale); \ + \ +/* Reset firpfb object's internal buffer */ \ +void FIRPFB(_reset)(FIRPFB() _q); \ + \ +/* Push sample into filter object's internal buffer */ \ +/* _q : filter object */ \ +/* _x : single input sample */ \ +void FIRPFB(_push)(FIRPFB() _q, \ + TI _x); \ + \ +/* Execute vector dot product on the filter's internal buffer and */ \ +/* coefficients using the coefficients from sub-filter at index _i */ \ +/* _q : firpfb object */ \ +/* _i : index of filter to use */ \ +/* _y : pointer to output sample */ \ +void FIRPFB(_execute)(FIRPFB() _q, \ + unsigned int _i, \ + TO * _y); \ + \ +/* Execute the filter on a block of input samples, all using index _i. */ \ +/* In-place operation is permitted (_x and _y may point to the same */ \ +/* place in memory) */ \ +/* _q : firpfb object */ \ +/* _i : index of filter to use */ \ +/* _x : pointer to input array [size: _n x 1] */ \ +/* _n : number of input, output samples */ \ +/* _y : pointer to output array [size: _n x 1] */ \ +void FIRPFB(_execute_block)(FIRPFB() _q, \ + unsigned int _i, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + +LIQUID_FIRPFB_DEFINE_API(LIQUID_FIRPFB_MANGLE_RRRF, + float, + float, + float) + +LIQUID_FIRPFB_DEFINE_API(LIQUID_FIRPFB_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_FIRPFB_DEFINE_API(LIQUID_FIRPFB_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + +// +// Interpolators +// + +// firinterp : finite impulse response interpolator +#define LIQUID_FIRINTERP_MANGLE_RRRF(name) LIQUID_CONCAT(firinterp_rrrf,name) +#define LIQUID_FIRINTERP_MANGLE_CRCF(name) LIQUID_CONCAT(firinterp_crcf,name) +#define LIQUID_FIRINTERP_MANGLE_CCCF(name) LIQUID_CONCAT(firinterp_cccf,name) + +#define LIQUID_FIRINTERP_DEFINE_API(FIRINTERP,TO,TC,TI) \ + \ +/* Finite impulse response (FIR) interpolator */ \ +typedef struct FIRINTERP(_s) * FIRINTERP(); \ + \ +/* Create interpolator from external coefficients. 
Internally the */ \ +/* interpolator creates a polyphase filter bank to efficiently realize */ \ +/* resampling of the input signal. */ \ +/* If the input filter length is not a multiple of the interpolation */ \ +/* factor, the object internally pads the coefficients with zeros to */ \ +/* compensate. */ \ +/* _M : interpolation factor, _M >= 2 */ \ +/* _h : filter coefficients, [size: _h_len x 1] */ \ +/* _h_len : filter length, _h_len >= _M */ \ +FIRINTERP() FIRINTERP(_create)(unsigned int _M, \ + TC * _h, \ + unsigned int _h_len); \ + \ +/* Create interpolator from filter prototype prototype (Kaiser-Bessel */ \ +/* windowed-sinc function) */ \ +/* _M : interpolation factor, _M >= 2 */ \ +/* _m : filter delay [symbols], _m >= 1 */ \ +/* _As : stop-band attenuation [dB], _As >= 0 */ \ +FIRINTERP() FIRINTERP(_create_kaiser)(unsigned int _M, \ + unsigned int _m, \ + float _As); \ + \ +/* Create interpolator object from filter prototype */ \ +/* _type : filter type (e.g. LIQUID_FIRFILT_RCOS) */ \ +/* _M : interpolation factor, _M > 1 */ \ +/* _m : filter delay (symbols), _m > 0 */ \ +/* _beta : excess bandwidth factor, 0 <= _beta <= 1 */ \ +/* _dt : fractional sample delay, -1 <= _dt <= 1 */ \ +FIRINTERP() FIRINTERP(_create_prototype)(int _type, \ + unsigned int _M, \ + unsigned int _m, \ + float _beta, \ + float _dt); \ + \ +/* Create linear interpolator object */ \ +/* _M : interpolation factor, _M > 1 */ \ +FIRINTERP() FIRINTERP(_create_linear)(unsigned int _M); \ + \ +/* Create window interpolator object */ \ +/* _M : interpolation factor, _M > 1 */ \ +/* _m : filter semi-length, _m > 0 */ \ +FIRINTERP() FIRINTERP(_create_window)(unsigned int _M, \ + unsigned int _m); \ + \ +/* Destroy firinterp object, freeing all internal memory */ \ +void FIRINTERP(_destroy)(FIRINTERP() _q); \ + \ +/* Print firinterp object's internal properties to stdout */ \ +void FIRINTERP(_print)(FIRINTERP() _q); \ + \ +/* Reset internal state */ \ +void FIRINTERP(_reset)(FIRINTERP() _q); \ + \ +/* Get interpolation rate */ \ +unsigned int FIRINTERP(_get_interp_rate)(FIRINTERP() _q); \ + \ +/* Set output scaling for interpolator */ \ +/* _q : interpolator object */ \ +/* _scale : scaling factor to apply to each output sample */ \ +void FIRINTERP(_set_scale)(FIRINTERP() _q, \ + TC _scale); \ + \ +/* Get output scaling for interpolator */ \ +/* _q : interpolator object */ \ +/* _scale : scaling factor to apply to each output sample */ \ +void FIRINTERP(_get_scale)(FIRINTERP() _q, \ + TC * _scale); \ + \ +/* Execute interpolation on single input sample and write \(M\) output */ \ +/* samples (\(M\) is the interpolation factor) */ \ +/* _q : firinterp object */ \ +/* _x : input sample */ \ +/* _y : output sample array, [size: _M x 1] */ \ +void FIRINTERP(_execute)(FIRINTERP() _q, \ + TI _x, \ + TO * _y); \ + \ +/* Execute interpolation on block of input samples */ \ +/* _q : firinterp object */ \ +/* _x : input array, [size: _n x 1] */ \ +/* _n : size of input array */ \ +/* _y : output sample array, [size: _M*_n x 1] */ \ +void FIRINTERP(_execute_block)(FIRINTERP() _q, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + +LIQUID_FIRINTERP_DEFINE_API(LIQUID_FIRINTERP_MANGLE_RRRF, + float, + float, + float) + +LIQUID_FIRINTERP_DEFINE_API(LIQUID_FIRINTERP_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_FIRINTERP_DEFINE_API(LIQUID_FIRINTERP_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + +// iirinterp : infinite impulse response interpolator +#define 
LIQUID_IIRINTERP_MANGLE_RRRF(name) LIQUID_CONCAT(iirinterp_rrrf,name) +#define LIQUID_IIRINTERP_MANGLE_CRCF(name) LIQUID_CONCAT(iirinterp_crcf,name) +#define LIQUID_IIRINTERP_MANGLE_CCCF(name) LIQUID_CONCAT(iirinterp_cccf,name) + +#define LIQUID_IIRINTERP_DEFINE_API(IIRINTERP,TO,TC,TI) \ + \ +/* Infinite impulse response (IIR) interpolator */ \ +typedef struct IIRINTERP(_s) * IIRINTERP(); \ + \ +/* Create infinite impulse response interpolator from external */ \ +/* coefficients. */ \ +/* Note that the number of feed-forward and feed-back coefficients do */ \ +/* not need to be equal, but they do need to be non-zero. */ \ +/* Furthermore, the first feed-back coefficient \(a_0\) cannot be */ \ +/* equal to zero, otherwise the filter will be invalid as this value is */ \ +/* factored out from all coefficients. */ \ +/* For stability reasons the number of coefficients should reasonably */ \ +/* not exceed about 8 for single-precision floating-point. */ \ +/* _M : interpolation factor, _M >= 2 */ \ +/* _b : feed-forward coefficients (numerator), [size: _nb x 1] */ \ +/* _nb : number of feed-forward coefficients, _nb > 0 */ \ +/* _a : feed-back coefficients (denominator), [size: _na x 1] */ \ +/* _na : number of feed-back coefficients, _na > 0 */ \ +IIRINTERP() IIRINTERP(_create)(unsigned int _M, \ + TC * _b, \ + unsigned int _nb, \ + TC * _a, \ + unsigned int _na); \ + \ +/* Create interpolator object with default Butterworth prototype */ \ +/* _M : interpolation factor, _M >= 2 */ \ +/* _order : filter order, _order > 0 */ \ +IIRINTERP() IIRINTERP(_create_default)(unsigned int _M, \ + unsigned int _order); \ + \ +/* Create IIR interpolator from prototype */ \ +/* _M : interpolation factor, _M >= 2 */ \ +/* _ftype : filter type (e.g. LIQUID_IIRDES_BUTTER) */ \ +/* _btype : band type (e.g. LIQUID_IIRDES_BANDPASS) */ \ +/* _format : coefficients format (e.g. 
LIQUID_IIRDES_SOS) */ \ +/* _order : filter order, _order > 0 */ \ +/* _fc : low-pass prototype cut-off frequency, 0 <= _fc <= 0.5 */ \ +/* _f0 : center frequency (band-pass, band-stop), 0 <= _f0 <= 0.5 */ \ +/* _Ap : pass-band ripple in dB, _Ap > 0 */ \ +/* _As : stop-band ripple in dB, _As > 0 */ \ +IIRINTERP() IIRINTERP(_create_prototype)( \ + unsigned int _M, \ + liquid_iirdes_filtertype _ftype, \ + liquid_iirdes_bandtype _btype, \ + liquid_iirdes_format _format, \ + unsigned int _order, \ + float _fc, \ + float _f0, \ + float _Ap, \ + float _As); \ + \ +/* Destroy interpolator object and free internal memory */ \ +void IIRINTERP(_destroy)(IIRINTERP() _q); \ + \ +/* Print interpolator object internals to stdout */ \ +void IIRINTERP(_print)(IIRINTERP() _q); \ + \ +/* Reset interpolator object */ \ +void IIRINTERP(_reset)(IIRINTERP() _q); \ + \ +/* Execute interpolation on single input sample and write \(M\) output */ \ +/* samples (\(M\) is the interpolation factor) */ \ +/* _q : iirinterp object */ \ +/* _x : input sample */ \ +/* _y : output sample array, [size: _M x 1] */ \ +void IIRINTERP(_execute)(IIRINTERP() _q, \ + TI _x, \ + TO * _y); \ + \ +/* Execute interpolation on block of input samples */ \ +/* _q : iirinterp object */ \ +/* _x : input array, [size: _n x 1] */ \ +/* _n : size of input array */ \ +/* _y : output sample array, [size: _M*_n x 1] */ \ +void IIRINTERP(_execute_block)(IIRINTERP() _q, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + \ +/* Compute and return group delay of object */ \ +/* _q : filter object */ \ +/* _fc : frequency to evaluate */ \ +float IIRINTERP(_groupdelay)(IIRINTERP() _q, \ + float _fc); \ + +LIQUID_IIRINTERP_DEFINE_API(LIQUID_IIRINTERP_MANGLE_RRRF, + float, + float, + float) + +LIQUID_IIRINTERP_DEFINE_API(LIQUID_IIRINTERP_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_IIRINTERP_DEFINE_API(LIQUID_IIRINTERP_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + +// +// Decimators +// + +// firdecim : finite impulse response decimator +#define LIQUID_FIRDECIM_MANGLE_RRRF(name) LIQUID_CONCAT(firdecim_rrrf,name) +#define LIQUID_FIRDECIM_MANGLE_CRCF(name) LIQUID_CONCAT(firdecim_crcf,name) +#define LIQUID_FIRDECIM_MANGLE_CCCF(name) LIQUID_CONCAT(firdecim_cccf,name) + +#define LIQUID_FIRDECIM_DEFINE_API(FIRDECIM,TO,TC,TI) \ + \ +/* Finite impulse response (FIR) decimator */ \ +typedef struct FIRDECIM(_s) * FIRDECIM(); \ + \ +/* Create decimator from external coefficients */ \ +/* _M : decimation factor, _M >= 2 */ \ +/* _h : filter coefficients, [size: _h_len x 1] */ \ +/* _h_len : filter length, _h_len >= _M */ \ +FIRDECIM() FIRDECIM(_create)(unsigned int _M, \ + TC * _h, \ + unsigned int _h_len); \ + \ +/* Create decimator from filter prototype prototype (Kaiser-Bessel */ \ +/* windowed-sinc function) */ \ +/* _M : decimation factor, _M >= 2 */ \ +/* _m : filter delay [symbols], _m >= 1 */ \ +/* _As : stop-band attenuation [dB], _As >= 0 */ \ +FIRDECIM() FIRDECIM(_create_kaiser)(unsigned int _M, \ + unsigned int _m, \ + float _As); \ + \ +/* Create decimator object from filter prototype */ \ +/* _type : filter type (e.g. 
LIQUID_FIRFILT_RCOS) */ \ +/* _M : interpolation factor, _M > 1 */ \ +/* _m : filter delay (symbols), _m > 0 */ \ +/* _beta : excess bandwidth factor, 0 <= _beta <= 1 */ \ +/* _dt : fractional sample delay, -1 <= _dt <= 1 */ \ +FIRDECIM() FIRDECIM(_create_prototype)(int _type, \ + unsigned int _M, \ + unsigned int _m, \ + float _beta, \ + float _dt); \ + \ +/* Destroy decimator object, freeing all internal memory */ \ +void FIRDECIM(_destroy)(FIRDECIM() _q); \ + \ +/* Print decimator object propreties to stdout */ \ +void FIRDECIM(_print)(FIRDECIM() _q); \ + \ +/* Reset decimator object internal state */ \ +void FIRDECIM(_reset)(FIRDECIM() _q); \ + \ +/* Get decimation rate */ \ +unsigned int FIRDECIM(_get_decim_rate)(FIRDECIM() _q); \ + \ +/* Set output scaling for decimator */ \ +/* _q : decimator object */ \ +/* _scale : scaling factor to apply to each output sample */ \ +void FIRDECIM(_set_scale)(FIRDECIM() _q, \ + TC _scale); \ + \ +/* Get output scaling for decimator */ \ +/* _q : decimator object */ \ +/* _scale : scaling factor to apply to each output sample */ \ +void FIRDECIM(_get_scale)(FIRDECIM() _q, \ + TC * _scale); \ + \ +/* Execute decimator on _M input samples */ \ +/* _q : decimator object */ \ +/* _x : input samples, [size: _M x 1] */ \ +/* _y : output sample pointer */ \ +void FIRDECIM(_execute)(FIRDECIM() _q, \ + TI * _x, \ + TO * _y); \ + \ +/* Execute decimator on block of _n*_M input samples */ \ +/* _q : decimator object */ \ +/* _x : input array, [size: _n*_M x 1] */ \ +/* _n : number of _output_ samples */ \ +/* _y : output array, [_size: _n x 1] */ \ +void FIRDECIM(_execute_block)(FIRDECIM() _q, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + +LIQUID_FIRDECIM_DEFINE_API(LIQUID_FIRDECIM_MANGLE_RRRF, + float, + float, + float) + +LIQUID_FIRDECIM_DEFINE_API(LIQUID_FIRDECIM_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_FIRDECIM_DEFINE_API(LIQUID_FIRDECIM_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// iirdecim : infinite impulse response decimator +#define LIQUID_IIRDECIM_MANGLE_RRRF(name) LIQUID_CONCAT(iirdecim_rrrf,name) +#define LIQUID_IIRDECIM_MANGLE_CRCF(name) LIQUID_CONCAT(iirdecim_crcf,name) +#define LIQUID_IIRDECIM_MANGLE_CCCF(name) LIQUID_CONCAT(iirdecim_cccf,name) + +#define LIQUID_IIRDECIM_DEFINE_API(IIRDECIM,TO,TC,TI) \ + \ +/* Infinite impulse response (IIR) decimator */ \ +typedef struct IIRDECIM(_s) * IIRDECIM(); \ + \ +/* Create infinite impulse response decimator from external */ \ +/* coefficients. */ \ +/* Note that the number of feed-forward and feed-back coefficients do */ \ +/* not need to be equal, but they do need to be non-zero. */ \ +/* Furthermore, the first feed-back coefficient \(a_0\) cannot be */ \ +/* equal to zero, otherwise the filter will be invalid as this value is */ \ +/* factored out from all coefficients. */ \ +/* For stability reasons the number of coefficients should reasonably */ \ +/* not exceed about 8 for single-precision floating-point. 
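// Minimal usage sketch for the firdecim API above (crcf variant): decimate by M=4
// with a Kaiser-designed anti-aliasing filter. All numeric values and the wrapper
// function name are illustrative assumptions.
#include <complex.h>
#include <liquid/liquid.h>

static void firdecim_usage_sketch(void)
{
    firdecim_crcf q = firdecim_crcf_create_kaiser(4, 8, 60.0f); // M=4, delay m=8, 60 dB
    float complex x[4] = {0};        // M = 4 input samples
    float complex y;                 // one output sample
    firdecim_crcf_execute(q, x, &y);
    firdecim_crcf_destroy(q);
}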
*/ \ +/* _M : decimation factor, _M >= 2 */ \ +/* _b : feed-forward coefficients (numerator), [size: _nb x 1] */ \ +/* _nb : number of feed-forward coefficients, _nb > 0 */ \ +/* _a : feed-back coefficients (denominator), [size: _na x 1] */ \ +/* _na : number of feed-back coefficients, _na > 0 */ \ +IIRDECIM() IIRDECIM(_create)(unsigned int _M, \ + TC * _b, \ + unsigned int _nb, \ + TC * _a, \ + unsigned int _na); \ + \ +/* Create decimator object with default Butterworth prototype */ \ +/* _M : decimation factor, _M >= 2 */ \ +/* _order : filter order, _order > 0 */ \ +IIRDECIM() IIRDECIM(_create_default)(unsigned int _M, \ + unsigned int _order); \ + \ +/* Create IIR decimator from prototype */ \ +/* _M : decimation factor, _M >= 2 */ \ +/* _ftype : filter type (e.g. LIQUID_IIRDES_BUTTER) */ \ +/* _btype : band type (e.g. LIQUID_IIRDES_BANDPASS) */ \ +/* _format : coefficients format (e.g. LIQUID_IIRDES_SOS) */ \ +/* _order : filter order, _order > 0 */ \ +/* _fc : low-pass prototype cut-off frequency, 0 <= _fc <= 0.5 */ \ +/* _f0 : center frequency (band-pass, band-stop), 0 <= _f0 <= 0.5 */ \ +/* _Ap : pass-band ripple in dB, _Ap > 0 */ \ +/* _As : stop-band ripple in dB, _As > 0 */ \ +IIRDECIM() IIRDECIM(_create_prototype)( \ + unsigned int _M, \ + liquid_iirdes_filtertype _ftype, \ + liquid_iirdes_bandtype _btype, \ + liquid_iirdes_format _format, \ + unsigned int _order, \ + float _fc, \ + float _f0, \ + float _Ap, \ + float _As); \ + \ +/* Destroy decimator object and free internal memory */ \ +void IIRDECIM(_destroy)(IIRDECIM() _q); \ + \ +/* Print decimator object internals */ \ +void IIRDECIM(_print)(IIRDECIM() _q); \ + \ +/* Reset decimator object */ \ +void IIRDECIM(_reset)(IIRDECIM() _q); \ + \ +/* Execute decimator on _M input samples */ \ +/* _q : decimator object */ \ +/* _x : input samples, [size: _M x 1] */ \ +/* _y : output sample pointer */ \ +void IIRDECIM(_execute)(IIRDECIM() _q, \ + TI * _x, \ + TO * _y); \ + \ +/* Execute decimator on block of _n*_M input samples */ \ +/* _q : decimator object */ \ +/* _x : input array, [size: _n*_M x 1] */ \ +/* _n : number of _output_ samples */ \ +/* _y : output array, [_sze: _n x 1] */ \ +void IIRDECIM(_execute_block)(IIRDECIM() _q, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + \ +/* Compute and return group delay of object */ \ +/* _q : filter object */ \ +/* _fc : frequency to evaluate */ \ +float IIRDECIM(_groupdelay)(IIRDECIM() _q, \ + float _fc); \ + +LIQUID_IIRDECIM_DEFINE_API(LIQUID_IIRDECIM_MANGLE_RRRF, + float, + float, + float) + +LIQUID_IIRDECIM_DEFINE_API(LIQUID_IIRDECIM_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_IIRDECIM_DEFINE_API(LIQUID_IIRDECIM_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + + +// +// Half-band resampler +// +#define LIQUID_RESAMP2_MANGLE_RRRF(name) LIQUID_CONCAT(resamp2_rrrf,name) +#define LIQUID_RESAMP2_MANGLE_CRCF(name) LIQUID_CONCAT(resamp2_crcf,name) +#define LIQUID_RESAMP2_MANGLE_CCCF(name) LIQUID_CONCAT(resamp2_cccf,name) + +#define LIQUID_RESAMP2_DEFINE_API(RESAMP2,TO,TC,TI) \ + \ +/* Half-band resampler, implemented as a dyadic (half-band) polyphase */ \ +/* filter bank for interpolation, decimation, synthesis, and analysis. */ \ +typedef struct RESAMP2(_s) * RESAMP2(); \ + \ +/* Create half-band resampler from design prototype. 
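// Minimal usage sketch for the iirdecim API above (crcf variant) with the default
// Butterworth prototype. Decimation factor, order and the wrapper function name are
// illustrative assumptions.
#include <complex.h>
#include <liquid/liquid.h>

static void iirdecim_usage_sketch(void)
{
    iirdecim_crcf q = iirdecim_crcf_create_default(2, 8); // M=2, 8th-order Butterworth
    float complex x[2] = {0};        // M = 2 input samples
    float complex y;                 // one output sample
    iirdecim_crcf_execute(q, x, &y);
    iirdecim_crcf_destroy(q);
}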
*/ \ +/* _m : filter semi-length (h_len = 4*m+1), _m >= 2 */ \ +/* _f0 : filter center frequency, -0.5 <= _f0 <= 0.5 */ \ +/* _As : stop-band attenuation [dB], _As > 0 */ \ +RESAMP2() RESAMP2(_create)(unsigned int _m, \ + float _f0, \ + float _As); \ + \ +/* Re-create half-band resampler with new properties */ \ +/* _q : original half-band resampler object */ \ +/* _m : filter semi-length (h_len = 4*m+1), _m >= 2 */ \ +/* _f0 : filter center frequency, -0.5 <= _f0 <= 0.5 */ \ +/* _As : stop-band attenuation [dB], _As > 0 */ \ +RESAMP2() RESAMP2(_recreate)(RESAMP2() _q, \ + unsigned int _m, \ + float _f0, \ + float _As); \ + \ +/* Destroy resampler, freeing all internally-allocated memory */ \ +void RESAMP2(_destroy)(RESAMP2() _q); \ + \ +/* print resampler object's internals to stdout */ \ +void RESAMP2(_print)(RESAMP2() _q); \ + \ +/* Reset internal buffer */ \ +void RESAMP2(_reset)(RESAMP2() _q); \ + \ +/* Get resampler filter delay (semi-length m) */ \ +unsigned int RESAMP2(_get_delay)(RESAMP2() _q); \ + \ +/* Execute resampler as half-band filter for a single input sample */ \ +/* \(x\) where \(y_0\) is the output of the effective low-pass filter, */ \ +/* and \(y_1\) is the output of the effective high-pass filter. */ \ +/* _q : resampler object */ \ +/* _x : input sample */ \ +/* _y0 : output sample pointer (low frequency) */ \ +/* _y1 : output sample pointer (high frequency) */ \ +void RESAMP2(_filter_execute)(RESAMP2() _q, \ + TI _x, \ + TO * _y0, \ + TO * _y1); \ + \ +/* Execute resampler as half-band analysis filterbank on a pair of */ \ +/* sequential time-domain input samples. */ \ +/* The decimated outputs of the low- and high-pass equivalent filters */ \ +/* are stored in \(y_0\) and \(y_1\), respectively. */ \ +/* _q : resampler object */ \ +/* _x : input array, [size: 2 x 1] */ \ +/* _y : output array, [size: 2 x 1] */ \ +void RESAMP2(_analyzer_execute)(RESAMP2() _q, \ + TI * _x, \ + TO * _y); \ + \ +/* Execute resampler as half-band synthesis filterbank on a pair of */ \ +/* input samples. The low- and high-pass input samples are provided by */ \ +/* \(x_0\) and \(x_1\), respectively. The sequential time-domain output */ \ +/* samples are stored in \(y_0\) and \(y_1\). */ \ +/* _q : resampler object */ \ +/* _x : input array [size: 2 x 1] */ \ +/* _y : output array [size: 2 x 1] */ \ +void RESAMP2(_synthesizer_execute)(RESAMP2() _q, \ + TI * _x, \ + TO * _y); \ + \ +/* Execute resampler as half-band decimator on a pair of sequential */ \ +/* time-domain input samples. 
*/ \ +/* _q : resampler object */ \ +/* _x : input array [size: 2 x 1] */ \ +/* _y : output sample pointer */ \ +void RESAMP2(_decim_execute)(RESAMP2() _q, \ + TI * _x, \ + TO * _y); \ + \ +/* Execute resampler as half-band interpolator on a single input sample */ \ +/* _q : resampler object */ \ +/* _x : input sample */ \ +/* _y : output array [size: 2 x 1] */ \ +void RESAMP2(_interp_execute)(RESAMP2() _q, \ + TI _x, \ + TO * _y); \ + +LIQUID_RESAMP2_DEFINE_API(LIQUID_RESAMP2_MANGLE_RRRF, + float, + float, + float) + +LIQUID_RESAMP2_DEFINE_API(LIQUID_RESAMP2_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_RESAMP2_DEFINE_API(LIQUID_RESAMP2_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// +// Rational resampler +// +#define LIQUID_RRESAMP_MANGLE_RRRF(name) LIQUID_CONCAT(rresamp_rrrf,name) +#define LIQUID_RRESAMP_MANGLE_CRCF(name) LIQUID_CONCAT(rresamp_crcf,name) +#define LIQUID_RRESAMP_MANGLE_CCCF(name) LIQUID_CONCAT(rresamp_cccf,name) + +#define LIQUID_RRESAMP_DEFINE_API(RRESAMP,TO,TC,TI) \ + \ +/* Rational rate resampler, implemented as a polyphase filterbank */ \ +typedef struct RRESAMP(_s) * RRESAMP(); \ + \ +/* Create rational-rate resampler object from external coeffcients to */ \ +/* resample at an exact rate P/Q. */ \ +/* Note that to preserve the input filter coefficients, the greatest */ \ +/* common divisor (gcd) is not removed internally from _P and _Q when */ \ +/* this method is called. */ \ +/* _P : interpolation factor, P > 0 */ \ +/* _Q : decimation factor, Q > 0 */ \ +/* _m : filter semi-length (delay), 0 < _m */ \ +/* _h : filter coefficients, [size: 2*_P*_m x 1] */ \ +RRESAMP() RRESAMP(_create)(unsigned int _P, \ + unsigned int _Q, \ + unsigned int _m, \ + TC * _h); \ + \ +/* Create rational-rate resampler object from filter prototype to */ \ +/* resample at an exact rate P/Q. */ \ +/* Note that because the filter coefficients are computed internally */ \ +/* here, the greatest common divisor (gcd) from _P and _Q is internally */ \ +/* removed to improve speed. */ \ +/* _P : interpolation factor, P > 0 */ \ +/* _Q : decimation factor, Q > 0 */ \ +/* _m : filter semi-length (delay), 0 < _m */ \ +/* _bw : filter bandwidth relative to sample rate, 0 < _bw <= 0.5 */ \ +/* _As : filter stop-band attenuation [dB], 0 < _As */ \ +RRESAMP() RRESAMP(_create_kaiser)(unsigned int _P, \ + unsigned int _Q, \ + unsigned int _m, \ + float _bw, \ + float _As); \ + \ +/* Create rational-rate resampler object from filter prototype to */ \ +/* resample at an exact rate P/Q. */ \ +/* Note that because the filter coefficients are computed internally */ \ +/* here, the greatest common divisor (gcd) from _P and _Q is internally */ \ +/* removed to improve speed. */ \ +RRESAMP() RRESAMP(_create_prototype)(int _type, \ + unsigned int _P, \ + unsigned int _Q, \ + unsigned int _m, \ + float _beta); \ + \ +/* Create rational resampler object with a specified resampling rate of */ \ +/* exactly P/Q with default parameters. This is a simplified method to */ \ +/* provide a basic resampler with a baseline set of parameters, */ \ +/* abstracting away some of the complexities with the filterbank */ \ +/* design. 
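// Minimal usage sketch for the resamp2 half-band resampler above (crcf variant),
// run as a 2:1 decimator. Semi-length, attenuation and the wrapper function name are
// illustrative assumptions.
#include <complex.h>
#include <liquid/liquid.h>

static void resamp2_usage_sketch(void)
{
    resamp2_crcf q = resamp2_crcf_create(7, 0.0f, 60.0f); // m=7, centered at DC, 60 dB
    float complex x[2] = {0};             // two consecutive input samples
    float complex y;                      // one output sample at half the input rate
    resamp2_crcf_decim_execute(q, x, &y);
    resamp2_crcf_destroy(q);
}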
*/ \ +/* The default parameters are */ \ +/* m = 12 (filter semi-length), */ \ +/* bw = 0.5 (filter bandwidth), and */ \ +/* As = 60 dB (filter stop-band attenuation) */ \ +/* _P : interpolation factor, P > 0 */ \ +/* _Q : decimation factor, Q > 0 */ \ +RRESAMP() RRESAMP(_create_default)(unsigned int _P, \ + unsigned int _Q); \ + \ +/* Destroy resampler object, freeing all internal memory */ \ +void RRESAMP(_destroy)(RRESAMP() _q); \ + \ +/* Print resampler object internals to stdout */ \ +void RRESAMP(_print)(RRESAMP() _q); \ + \ +/* Reset resampler object internals */ \ +void RRESAMP(_reset)(RRESAMP() _q); \ + \ +/* Set output scaling for filter, default: \( 2 w \sqrt{P/Q} \) */ \ +/* _q : resampler object */ \ +/* _scale : scaling factor to apply to each output sample */ \ +void RRESAMP(_set_scale)(RRESAMP() _q, \ + TC _scale); \ + \ +/* Get output scaling for filter */ \ +/* _q : resampler object */ \ +/* _scale : scaling factor to apply to each output sample */ \ +void RRESAMP(_get_scale)(RRESAMP() _q, \ + TC * _scale); \ + \ +/* Get resampler delay (filter semi-length \(m\)) */ \ +unsigned int RRESAMP(_get_delay)(RRESAMP() _q); \ + \ +/* Get original interpolation factor \(P\) when object was created */ \ +/* before removing greatest common divisor */ \ +unsigned int RRESAMP(_get_P)(RRESAMP() _q); \ + \ +/* Get internal interpolation factor of resampler, \(P\), after */ \ +/* removing greatest common divisor */ \ +unsigned int RRESAMP(_get_interp)(RRESAMP() _q); \ + \ +/* Get original decimation factor \(Q\) when object was created */ \ +/* before removing greatest common divisor */ \ +unsigned int RRESAMP(_get_Q)(RRESAMP() _q); \ + \ +/* Get internal decimation factor of resampler, \(Q\), after removing */ \ +/* greatest common divisor */ \ +unsigned int RRESAMP(_get_decim)(RRESAMP() _q); \ + \ +/* Get block length (e.g. greatest common divisor) between original P */ \ +/* and Q values */ \ +unsigned int RRESAMP(_get_block_len)(RRESAMP() _q); \ + \ +/* Get rate of resampler, \(r = P/Q\) */ \ +float RRESAMP(_get_rate)(RRESAMP() _q); \ + \ +/* Execute rational-rate resampler on a block of input samples and */ \ +/* store the resulting samples in the output array. */ \ +/* Note that the size of the input and output buffers correspond to the */ \ +/* values of P and Q passed when the object was created, even if they */ \ +/* share a common divisor. Internally the rational resampler reduces P */ \ +/* and Q by their greatest commmon denominator to reduce processing; */ \ +/* however sometimes it is convenienct to create the object based on */ \ +/* expected output/input block sizes. This expectation is preserved. So */ \ +/* if an object is created with P=80 and Q=72, the object will */ \ +/* internally set P=10 and Q=9 (with a g.c.d of 8); however when */ \ +/* "execute" is called the resampler will still expect an input buffer */ \ +/* of 72 and an output buffer of 80. 
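// Minimal usage sketch for the rresamp rational resampler above (crcf variant) at an
// exact rate P/Q = 3/2, using the default constructor and the execute method declared
// just below. Buffer sizes follow the P/Q convention; the wrapper name is an assumption.
#include <complex.h>
#include <liquid/liquid.h>

static void rresamp_usage_sketch(void)
{
    rresamp_crcf q = rresamp_crcf_create_default(3, 2); // P=3, Q=2 -> rate 1.5
    float complex x[2] = {0};        // Q = 2 input samples per call
    float complex y[3];              // P = 3 output samples per call
    rresamp_crcf_execute(q, x, y);
    rresamp_crcf_destroy(q);
}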
*/ \ +/* _q : resamp object */ \ +/* _x : input sample array, [size: Q x 1] */ \ +/* _y : output sample array [size: P x 1] */ \ +void RRESAMP(_execute)(RRESAMP() _q, \ + TI * _x, \ + TO * _y); \ + +LIQUID_RRESAMP_DEFINE_API(LIQUID_RRESAMP_MANGLE_RRRF, + float, + float, + float) + +LIQUID_RRESAMP_DEFINE_API(LIQUID_RRESAMP_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_RRESAMP_DEFINE_API(LIQUID_RRESAMP_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// +// Arbitrary resampler +// +#define LIQUID_RESAMP_MANGLE_RRRF(name) LIQUID_CONCAT(resamp_rrrf,name) +#define LIQUID_RESAMP_MANGLE_CRCF(name) LIQUID_CONCAT(resamp_crcf,name) +#define LIQUID_RESAMP_MANGLE_CCCF(name) LIQUID_CONCAT(resamp_cccf,name) + +#define LIQUID_RESAMP_DEFINE_API(RESAMP,TO,TC,TI) \ + \ +/* Arbitrary rate resampler, implemented as a polyphase filterbank */ \ +typedef struct RESAMP(_s) * RESAMP(); \ + \ +/* Create arbitrary resampler object from filter prototype */ \ +/* _rate : arbitrary resampling rate, 0 < _rate */ \ +/* _m : filter semi-length (delay), 0 < _m */ \ +/* _fc : filter cutoff frequency, 0 < _fc < 0.5 */ \ +/* _As : filter stop-band attenuation [dB], 0 < _As */ \ +/* _npfb : number of filters in the bank, 0 < _npfb */ \ +RESAMP() RESAMP(_create)(float _rate, \ + unsigned int _m, \ + float _fc, \ + float _As, \ + unsigned int _npfb); \ + \ +/* Create arbitrary resampler object with a specified input resampling */ \ +/* rate and default parameters. This is a simplified method to provide */ \ +/* a basic resampler with a baseline set of parameters, abstracting */ \ +/* away some of the complexities with the filterbank design. */ \ +/* The default parameters are */ \ +/* m = 7 (filter semi-length), */ \ +/* fc = min(0.49,_rate/2) (filter cutoff frequency), */ \ +/* As = 60 dB (filter stop-band attenuation), and */ \ +/* npfb = 64 (number of filters in the bank). */ \ +/* _rate : arbitrary resampling rate, 0 < _rate */ \ +RESAMP() RESAMP(_create_default)(float _rate); \ + \ +/* Destroy arbitrary resampler object, freeing all internal memory */ \ +void RESAMP(_destroy)(RESAMP() _q); \ + \ +/* Print resamp object internals to stdout */ \ +void RESAMP(_print)(RESAMP() _q); \ + \ +/* Reset resamp object internals */ \ +void RESAMP(_reset)(RESAMP() _q); \ + \ +/* Get resampler delay (filter semi-length \(m\)) */ \ +unsigned int RESAMP(_get_delay)(RESAMP() _q); \ + \ +/* Set rate of arbitrary resampler */ \ +/* _q : resampling object */ \ +/* _rate : new sampling rate, _rate > 0 */ \ +void RESAMP(_set_rate)(RESAMP() _q, \ + float _rate); \ + \ +/* Get rate of arbitrary resampler */ \ +float RESAMP(_get_rate)(RESAMP() _q); \ + \ +/* adjust rate of arbitrary resampler */ \ +/* _q : resampling object */ \ +/* _gamma : rate adjustment factor: rate <- rate * gamma, _gamma > 0 */ \ +void RESAMP(_adjust_rate)(RESAMP() _q, \ + float _gamma); \ + \ +/* Set resampling timing phase */ \ +/* _q : resampling object */ \ +/* _tau : sample timing phase, -1 <= _tau <= 1 */ \ +void RESAMP(_set_timing_phase)(RESAMP() _q, \ + float _tau); \ + \ +/* Adjust resampling timing phase */ \ +/* _q : resampling object */ \ +/* _delta : sample timing adjustment, -1 <= _delta <= 1 */ \ +void RESAMP(_adjust_timing_phase)(RESAMP() _q, \ + float _delta); \ + \ +/* Execute arbitrary resampler on a single input sample and store the */ \ +/* resulting samples in the output array. 
The number of output samples */ \ +/* is depenent upon the resampling rate but will be at most */ \ +/* \( \lceil{ r \rceil} \) samples. */ \ +/* _q : resamp object */ \ +/* _x : single input sample */ \ +/* _y : output sample array (pointer) */ \ +/* _num_written : number of samples written to _y */ \ +void RESAMP(_execute)(RESAMP() _q, \ + TI _x, \ + TO * _y, \ + unsigned int * _num_written); \ + \ +/* Execute arbitrary resampler on a block of input samples and store */ \ +/* the resulting samples in the output array. The number of output */ \ +/* samples is depenent upon the resampling rate and the number of input */ \ +/* samples but will be at most \( \lceil{ r n_x \rceil} \) samples. */ \ +/* _q : resamp object */ \ +/* _x : input buffer, [size: _nx x 1] */ \ +/* _nx : input buffer */ \ +/* _y : output sample array (pointer) */ \ +/* _ny : number of samples written to _y */ \ +void RESAMP(_execute_block)(RESAMP() _q, \ + TI * _x, \ + unsigned int _nx, \ + TO * _y, \ + unsigned int * _ny); \ + +LIQUID_RESAMP_DEFINE_API(LIQUID_RESAMP_MANGLE_RRRF, + float, + float, + float) + +LIQUID_RESAMP_DEFINE_API(LIQUID_RESAMP_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_RESAMP_DEFINE_API(LIQUID_RESAMP_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// +// Multi-stage half-band resampler +// + +// resampling type (interpolator/decimator) +typedef enum { + LIQUID_RESAMP_INTERP=0, // interpolator + LIQUID_RESAMP_DECIM, // decimator +} liquid_resamp_type; + +#define LIQUID_MSRESAMP2_MANGLE_RRRF(name) LIQUID_CONCAT(msresamp2_rrrf,name) +#define LIQUID_MSRESAMP2_MANGLE_CRCF(name) LIQUID_CONCAT(msresamp2_crcf,name) +#define LIQUID_MSRESAMP2_MANGLE_CCCF(name) LIQUID_CONCAT(msresamp2_cccf,name) + +#define LIQUID_MSRESAMP2_DEFINE_API(MSRESAMP2,TO,TC,TI) \ + \ +/* Multi-stage half-band resampler, implemented as cascaded dyadic */ \ +/* (half-band) polyphase filter banks for interpolation and decimation. */ \ +typedef struct MSRESAMP2(_s) * MSRESAMP2(); \ + \ +/* Create multi-stage half-band resampler as either decimator or */ \ +/* interpolator. */ \ +/* _type : resampler type (e.g. 
LIQUID_RESAMP_DECIM) */ \ +/* _num_stages : number of resampling stages, _num_stages <= 16 */ \ +/* _fc : filter cut-off frequency, 0 < _fc < 0.5 */ \ +/* _f0 : filter center frequency (set to zero) */ \ +/* _As : stop-band attenuation [dB], _As > 0 */ \ +MSRESAMP2() MSRESAMP2(_create)(int _type, \ + unsigned int _num_stages, \ + float _fc, \ + float _f0, \ + float _As); \ + \ +/* Destroy multi-stage half-band resampler, freeing all internal memory */ \ +void MSRESAMP2(_destroy)(MSRESAMP2() _q); \ + \ +/* Print msresamp object internals to stdout */ \ +void MSRESAMP2(_print)(MSRESAMP2() _q); \ + \ +/* Reset msresamp object internal state */ \ +void MSRESAMP2(_reset)(MSRESAMP2() _q); \ + \ +/* Get multi-stage half-band resampling rate */ \ +float MSRESAMP2(_get_rate)(MSRESAMP2() _q); \ + \ +/* Get number of half-band resampling stages in object */ \ +unsigned int MSRESAMP2(_get_num_stages)(MSRESAMP2() _q); \ + \ +/* Get resampling type (LIQUID_RESAMP_DECIM, LIQUID_RESAMP_INTERP) */ \ +int MSRESAMP2(_get_type)(MSRESAMP2() _q); \ + \ +/* Get group delay (number of output samples) */ \ +float MSRESAMP2(_get_delay)(MSRESAMP2() _q); \ + \ +/* Execute multi-stage resampler, M = 2^num_stages */ \ +/* LIQUID_RESAMP_INTERP: input: 1, output: M */ \ +/* LIQUID_RESAMP_DECIM: input: M, output: 1 */ \ +/* _q : msresamp object */ \ +/* _x : input sample array */ \ +/* _y : output sample array */ \ +void MSRESAMP2(_execute)(MSRESAMP2() _q, \ + TI * _x, \ + TO * _y); \ + +LIQUID_MSRESAMP2_DEFINE_API(LIQUID_MSRESAMP2_MANGLE_RRRF, + float, + float, + float) + +LIQUID_MSRESAMP2_DEFINE_API(LIQUID_MSRESAMP2_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_MSRESAMP2_DEFINE_API(LIQUID_MSRESAMP2_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// +// Multi-stage arbitrary resampler +// +#define LIQUID_MSRESAMP_MANGLE_RRRF(name) LIQUID_CONCAT(msresamp_rrrf,name) +#define LIQUID_MSRESAMP_MANGLE_CRCF(name) LIQUID_CONCAT(msresamp_crcf,name) +#define LIQUID_MSRESAMP_MANGLE_CCCF(name) LIQUID_CONCAT(msresamp_cccf,name) + +#define LIQUID_MSRESAMP_DEFINE_API(MSRESAMP,TO,TC,TI) \ + \ +/* Multi-stage half-band resampler, implemented as cascaded dyadic */ \ +/* (half-band) polyphase filter banks followed by an arbitrary rate */ \ +/* resampler for interpolation and decimation. */ \ +typedef struct MSRESAMP(_s) * MSRESAMP(); \ + \ +/* Create multi-stage arbitrary resampler */ \ +/* _r : resampling rate (output/input), _r > 0 */ \ +/* _As : stop-band attenuation [dB], _As > 0 */ \ +MSRESAMP() MSRESAMP(_create)(float _r, \ + float _As); \ + \ +/* Destroy multi-stage arbitrary resampler */ \ +void MSRESAMP(_destroy)(MSRESAMP() _q); \ + \ +/* Print msresamp object internals to stdout */ \ +void MSRESAMP(_print)(MSRESAMP() _q); \ + \ +/* Reset msresamp object internal state */ \ +void MSRESAMP(_reset)(MSRESAMP() _q); \ + \ +/* Get filter delay (output samples) */ \ +float MSRESAMP(_get_delay)(MSRESAMP() _q); \ + \ +/* get overall resampling rate */ \ +float MSRESAMP(_get_rate)(MSRESAMP() _q); \ + \ +/* Execute multi-stage resampler on one or more input samples. */ \ +/* The number of output samples is dependent upon the resampling rate */ \ +/* and the number of input samples. In general it is good practice to */ \ +/* allocate at least \( \lceil{ 1 + 2 r n_x \rceil} \) samples in the */ \ +/* output array to avoid overflows. 
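/* A minimal usage sketch (illustrative, not taken from hsmodem) of the multi-stage
 * arbitrary resampler msresamp_crcf declared in this section, assuming the header is
 * included as "liquid.h". The output buffer follows the ceil(1 + 2*r*nx) guidance above. */
#include <math.h>
#include <stdlib.h>
#include "liquid.h"

void msresamp_sketch(liquid_float_complex * x, unsigned int nx)
{
    float r  = 0.7115f;     // resampling rate (output/input), illustrative value
    float As = 60.0f;       // stop-band attenuation [dB]
    msresamp_crcf q = msresamp_crcf_create(r, As);

    // allocate at least ceil(1 + 2*r*nx) output samples to avoid overflow
    unsigned int ny_max = (unsigned int) ceilf(1.0f + 2.0f*r*(float)nx);
    liquid_float_complex * y = (liquid_float_complex*) malloc(ny_max*sizeof(liquid_float_complex));

    unsigned int ny = 0;    // actual number of output samples written
    msresamp_crcf_execute(q, x, nx, y, &ny);

    free(y);
    msresamp_crcf_destroy(q);
}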
*/ \ +/* _q : msresamp object */ \ +/* _x : input sample array, [size: _nx x 1] */ \ +/* _nx : input sample array size */ \ +/* _y : pointer to output array for storing result */ \ +/* _ny : number of samples written to _y */ \ +void MSRESAMP(_execute)(MSRESAMP() _q, \ + TI * _x, \ + unsigned int _nx, \ + TO * _y, \ + unsigned int * _ny); \ + +LIQUID_MSRESAMP_DEFINE_API(LIQUID_MSRESAMP_MANGLE_RRRF, + float, + float, + float) + +LIQUID_MSRESAMP_DEFINE_API(LIQUID_MSRESAMP_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_MSRESAMP_DEFINE_API(LIQUID_MSRESAMP_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + +// +// Direct digital [up/down] synthesizer +// + +#define DDS_MANGLE_CCCF(name) LIQUID_CONCAT(dds_cccf,name) + +#define LIQUID_DDS_DEFINE_API(DDS,TO,TC,TI) \ +typedef struct DDS(_s) * DDS(); \ + \ +/* create digital synthesizer object */ \ +DDS() DDS(_create)(unsigned int _num_stages, \ + float _fc, \ + float _bw, \ + float _As); \ + \ +/* destroy digital synthesizer object */ \ +void DDS(_destroy)(DDS() _q); \ + \ +/* print synthesizer object internals to stdout */ \ +void DDS(_print)(DDS() _q); \ + \ +/* reset synthesizer object internals */ \ +void DDS(_reset)(DDS() _q); \ + \ +void DDS(_decim_execute)(DDS() _q, \ + TI * _x, \ + TO * _y); \ +void DDS(_interp_execute)(DDS() _q, \ + TI _x, \ + TO * _y); \ + +LIQUID_DDS_DEFINE_API(DDS_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// +// Symbol timing recovery (symbol synchronizer) +// +#define LIQUID_SYMSYNC_MANGLE_RRRF(name) LIQUID_CONCAT(symsync_rrrf,name) +#define LIQUID_SYMSYNC_MANGLE_CRCF(name) LIQUID_CONCAT(symsync_crcf,name) + +#define LIQUID_SYMSYNC_DEFINE_API(SYMSYNC,TO,TC,TI) \ + \ +/* Multi-rate symbol synchronizer for symbol timing recovery. */ \ +typedef struct SYMSYNC(_s) * SYMSYNC(); \ + \ +/* Create synchronizer object from external coefficients */ \ +/* _k : samples per symbol, _k >= 2 */ \ +/* _M : number of filters in the bank, _M > 0 */ \ +/* _h : matched filter coefficients, [size: _h_len x 1] */ \ +/* _h_len : length of matched filter; \( h_{len} = 2 k m + 1 \) */ \ +SYMSYNC() SYMSYNC(_create)(unsigned int _k, \ + unsigned int _M, \ + TC * _h, \ + unsigned int _h_len); \ + \ +/* Create square-root Nyquist symbol synchronizer from prototype */ \ +/* _type : filter type (e.g. LIQUID_FIRFILT_RRC) */ \ +/* _k : samples/symbol, _k >= 2 */ \ +/* _m : symbol delay, _m > 0 */ \ +/* _beta : rolloff factor, 0 <= _beta <= 1 */ \ +/* _M : number of filters in the bank, _M > 0 */ \ +SYMSYNC() SYMSYNC(_create_rnyquist)(int _type, \ + unsigned int _k, \ + unsigned int _m, \ + float _beta, \ + unsigned int _M); \ + \ +/* Create symsync using Kaiser filter interpolator. This is useful when */ \ +/* the input signal has its matched filter applied already. 
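/* A hedged usage sketch of symbol timing recovery with symsync_crcf as declared in this
 * section; the parameter values (k, m, beta, M, loop bandwidth) are illustrative and not
 * taken from hsmodem. */
#include <stdlib.h>
#include "liquid.h"

void symsync_sketch(liquid_float_complex * rx, unsigned int n)
{
    unsigned int k    = 4;      // input samples per symbol (k >= 2)
    unsigned int m    = 7;      // filter delay [symbols]
    float        beta = 0.3f;   // rolloff factor
    unsigned int M    = 32;     // number of filters in the bank

    symsync_crcf q = symsync_crcf_create_rnyquist(LIQUID_FIRFILT_RRC, k, m, beta, M);
    symsync_crcf_set_lf_bw(q, 0.02f);           // loop-filter bandwidth
    symsync_crcf_set_output_rate(q, 1);         // one output sample per recovered symbol

    liquid_float_complex * syms = (liquid_float_complex*) malloc(n*sizeof(liquid_float_complex));
    unsigned int num_syms = 0;                  // n is a generous upper bound here
    symsync_crcf_execute(q, rx, n, syms, &num_syms);

    free(syms);
    symsync_crcf_destroy(q);
}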
*/ \ +/* _k : input samples/symbol, _k >= 2 */ \ +/* _m : symbol delay, _m > 0 */ \ +/* _beta : rolloff factor, 0<= _beta <= 1 */ \ +/* _M : number of filters in the bank, _M > 0 */ \ +SYMSYNC() SYMSYNC(_create_kaiser)(unsigned int _k, \ + unsigned int _m, \ + float _beta, \ + unsigned int _M); \ + \ +/* Destroy symsync object, freeing all internal memory */ \ +void SYMSYNC(_destroy)(SYMSYNC() _q); \ + \ +/* Print symsync object's parameters to stdout */ \ +void SYMSYNC(_print)(SYMSYNC() _q); \ + \ +/* Reset symsync internal state */ \ +void SYMSYNC(_reset)(SYMSYNC() _q); \ + \ +/* Lock the symbol synchronizer's loop control */ \ +void SYMSYNC(_lock)(SYMSYNC() _q); \ + \ +/* Unlock the symbol synchronizer's loop control */ \ +void SYMSYNC(_unlock)(SYMSYNC() _q); \ + \ +/* Set synchronizer output rate (samples/symbol) */ \ +/* _q : synchronizer object */ \ +/* _k_out : output samples/symbol, _k_out > 0 */ \ +void SYMSYNC(_set_output_rate)(SYMSYNC() _q, \ + unsigned int _k_out); \ + \ +/* Set loop-filter bandwidth */ \ +/* _q : synchronizer object */ \ +/* _bt : loop bandwidth, 0 <= _bt <= 1 */ \ +void SYMSYNC(_set_lf_bw)(SYMSYNC() _q, \ + float _bt); \ + \ +/* Return instantaneous fractional timing offset estimate */ \ +float SYMSYNC(_get_tau)(SYMSYNC() _q); \ + \ +/* Execute synchronizer on input data array */ \ +/* _q : synchronizer object */ \ +/* _x : input data array, [size: _nx x 1] */ \ +/* _nx : number of input samples */ \ +/* _y : output data array */ \ +/* _ny : number of samples written to output buffer */ \ +void SYMSYNC(_execute)(SYMSYNC() _q, \ + TI * _x, \ + unsigned int _nx, \ + TO * _y, \ + unsigned int * _ny); \ + +LIQUID_SYMSYNC_DEFINE_API(LIQUID_SYMSYNC_MANGLE_RRRF, + float, + float, + float) + +LIQUID_SYMSYNC_DEFINE_API(LIQUID_SYMSYNC_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + + +// +// Finite impulse response Farrow filter +// + +#define LIQUID_FIRFARROW_MANGLE_RRRF(name) LIQUID_CONCAT(firfarrow_rrrf,name) +#define LIQUID_FIRFARROW_MANGLE_CRCF(name) LIQUID_CONCAT(firfarrow_crcf,name) +//#define LIQUID_FIRFARROW_MANGLE_CCCF(name) LIQUID_CONCAT(firfarrow_cccf,name) + +// Macro: +// FIRFARROW : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_FIRFARROW_DEFINE_API(FIRFARROW,TO,TC,TI) \ + \ +/* Finite impulse response (FIR) Farrow filter for timing delay */ \ +typedef struct FIRFARROW(_s) * FIRFARROW(); \ + \ +/* Create firfarrow object */ \ +/* _h_len : filter length, _h_len >= 2 */ \ +/* _p : polynomial order, _p >= 1 */ \ +/* _fc : filter cutoff frequency, 0 <= _fc <= 0.5 */ \ +/* _As : stopband attenuation [dB], _As > 0 */ \ +FIRFARROW() FIRFARROW(_create)(unsigned int _h_len, \ + unsigned int _p, \ + float _fc, \ + float _As); \ + \ +/* Destroy firfarrow object, freeing all internal memory */ \ +void FIRFARROW(_destroy)(FIRFARROW() _q); \ + \ +/* Print firfarrow object's internal properties */ \ +void FIRFARROW(_print)(FIRFARROW() _q); \ + \ +/* Reset firfarrow object's internal state */ \ +void FIRFARROW(_reset)(FIRFARROW() _q); \ + \ +/* Push sample into firfarrow object */ \ +/* _q : firfarrow object */ \ +/* _x : input sample */ \ +void FIRFARROW(_push)(FIRFARROW() _q, \ + TI _x); \ + \ +/* Set fractional delay of firfarrow object */ \ +/* _q : firfarrow object */ \ +/* _mu : fractional sample delay, -1 <= _mu <= 1 */ \ +void FIRFARROW(_set_delay)(FIRFARROW() _q, \ + float _mu); \ + \ +/* Execute firfarrow internal dot product */ \ +/* _q : firfarrow object */ \ +/* 
_y : output sample pointer */ \ +void FIRFARROW(_execute)(FIRFARROW() _q, \ + TO * _y); \ + \ +/* Execute firfarrow filter on block of samples. */ \ +/* In-place operation is permitted (the input and output arrays may */ \ +/* share the same pointer) */ \ +/* _q : firfarrow object */ \ +/* _x : input array, [size: _n x 1] */ \ +/* _n : input, output array size */ \ +/* _y : output array, [size: _n x 1] */ \ +void FIRFARROW(_execute_block)(FIRFARROW() _q, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + \ +/* Get length of firfarrow object (number of filter taps) */ \ +unsigned int FIRFARROW(_get_length)(FIRFARROW() _q); \ + \ +/* Get coefficients of firfarrow object */ \ +/* _q : firfarrow object */ \ +/* _h : output coefficients pointer, [size: _h_len x 1] */ \ +void FIRFARROW(_get_coefficients)(FIRFARROW() _q, \ + float * _h); \ + \ +/* Compute complex frequency response */ \ +/* _q : filter object */ \ +/* _fc : frequency */ \ +/* _H : output frequency response */ \ +void FIRFARROW(_freqresponse)(FIRFARROW() _q, \ + float _fc, \ + liquid_float_complex * _H); \ + \ +/* Compute group delay [samples] */ \ +/* _q : filter object */ \ +/* _fc : frequency */ \ +float FIRFARROW(_groupdelay)(FIRFARROW() _q, \ + float _fc); \ + +LIQUID_FIRFARROW_DEFINE_API(LIQUID_FIRFARROW_MANGLE_RRRF, + float, + float, + float) + +LIQUID_FIRFARROW_DEFINE_API(LIQUID_FIRFARROW_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + + +// +// Order-statistic filter +// + +#define LIQUID_ORDFILT_MANGLE_RRRF(name) LIQUID_CONCAT(ordfilt_rrrf,name) + +// Macro: +// ORDFILT : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_ORDFILT_DEFINE_API(ORDFILT,TO,TC,TI) \ + \ +/* Finite impulse response (FIR) filter */ \ +typedef struct ORDFILT(_s) * ORDFILT(); \ + \ +/* Create a order-statistic filter (ordfilt) object by specifying */ \ +/* the buffer size and appropriate sample index of order statistic. */ \ +/* _n : buffer size, _n > 0 */ \ +/* _k : sample index for order statistic, 0 <= _k < _n */ \ +ORDFILT() ORDFILT(_create)(unsigned int _n, \ + unsigned int _k); \ + \ +/* Create a median filter by specifying buffer semi-length. 
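/* A hedged sketch of a fractional sample delay using the firfarrow_crcf API documented
 * above; filter length, polynomial order, cutoff, attenuation and delay values are
 * illustrative. */
#include "liquid.h"

void firfarrow_sketch(liquid_float_complex * buf, unsigned int n)
{
    // 19-tap Farrow filter, 5th-order polynomial, cutoff 0.45, 60 dB stop-band (illustrative)
    firfarrow_crcf f = firfarrow_crcf_create(19, 5, 0.45f, 60.0f);
    firfarrow_crcf_set_delay(f, 0.3f);      // fractional delay of 0.3 samples, -1 <= mu <= 1

    // in-place operation is permitted per the execute_block() description
    firfarrow_crcf_execute_block(f, buf, n, buf);

    firfarrow_crcf_destroy(f);
}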
*/ \ +/* _m : buffer semi-length */ \ +ORDFILT() ORDFILT(_create_medfilt)(unsigned int _m); \ + \ +/* Destroy filter object and free all internal memory */ \ +void ORDFILT(_destroy)(ORDFILT() _q); \ + \ +/* Reset filter object's internal buffer */ \ +void ORDFILT(_reset)(ORDFILT() _q); \ + \ +/* Print filter object information to stdout */ \ +void ORDFILT(_print)(ORDFILT() _q); \ + \ +/* Push sample into filter object's internal buffer */ \ +/* _q : filter object */ \ +/* _x : single input sample */ \ +void ORDFILT(_push)(ORDFILT() _q, \ + TI _x); \ + \ +/* Write block of samples into object's internal buffer */ \ +/* _q : filter object */ \ +/* _x : array of input samples, [size: _n x 1] */ \ +/* _n : number of input elements */ \ +void ORDFILT(_write)(ORDFILT() _q, \ + TI * _x, \ + unsigned int _n); \ + \ +/* Execute vector dot product on the filter's internal buffer and */ \ +/* coefficients */ \ +/* _q : filter object */ \ +/* _y : pointer to single output sample */ \ +void ORDFILT(_execute)(ORDFILT() _q, \ + TO * _y); \ + \ +/* Execute the filter on a block of input samples; in-place operation */ \ +/* is permitted (_x and _y may point to the same place in memory) */ \ +/* _q : filter object */ \ +/* _x : pointer to input array, [size: _n x 1] */ \ +/* _n : number of input, output samples */ \ +/* _y : pointer to output array, [size: _n x 1] */ \ +void ORDFILT(_execute_block)(ORDFILT() _q, \ + TI * _x, \ + unsigned int _n, \ + TO * _y); \ + +LIQUID_ORDFILT_DEFINE_API(LIQUID_ORDFILT_MANGLE_RRRF, + float, + float, + float) + + +// +// MODULE : framing +// + +// framesyncstats : generic frame synchronizer statistic structure + +typedef struct { + // signal quality + float evm; // error vector magnitude [dB] + float rssi; // received signal strength indicator [dB] + float cfo; // carrier frequency offset (f/Fs) + + // demodulated frame symbols + liquid_float_complex * framesyms; // pointer to array [size: framesyms x 1] + unsigned int num_framesyms; // length of framesyms + + // modulation/coding scheme etc. + unsigned int mod_scheme; // modulation scheme + unsigned int mod_bps; // modulation depth (bits/symbol) + unsigned int check; // data validity check (crc, checksum) + unsigned int fec0; // forward error-correction (inner) + unsigned int fec1; // forward error-correction (outer) +} framesyncstats_s; + +// external framesyncstats default object +extern framesyncstats_s framesyncstats_default; + +// initialize framesyncstats object on default +int framesyncstats_init_default(framesyncstats_s * _stats); + +// print framesyncstats object +int framesyncstats_print(framesyncstats_s * _stats); + + +// framedatastats : gather frame data +typedef struct { + unsigned int num_frames_detected; + unsigned int num_headers_valid; + unsigned int num_payloads_valid; + unsigned long int num_bytes_received; +} framedatastats_s; + +// reset framedatastats object +int framedatastats_reset(framedatastats_s * _stats); + +// print framedatastats object +int framedatastats_print(framedatastats_s * _stats); + + +// Generic frame synchronizer callback function type +// _header : header data [size: 8 bytes] +// _header_valid : is header valid? (0:no, 1:yes) +// _payload : payload data [size: _payload_len] +// _payload_len : length of payload (bytes) +// _payload_valid : is payload valid? 
(0:no, 1:yes) +// _stats : frame statistics object +// _userdata : pointer to userdata +typedef int (*framesync_callback)(unsigned char * _header, + int _header_valid, + unsigned char * _payload, + unsigned int _payload_len, + int _payload_valid, + framesyncstats_s _stats, + void * _userdata); + +// framesync csma callback functions invoked when signal levels is high or low +// _userdata : user-defined data pointer +typedef void (*framesync_csma_callback)(void * _userdata); + +// +// packet encoder/decoder +// + +typedef struct qpacketmodem_s * qpacketmodem; + +// create packet encoder +qpacketmodem qpacketmodem_create (); +int qpacketmodem_destroy(qpacketmodem _q); +int qpacketmodem_reset (qpacketmodem _q); +int qpacketmodem_print (qpacketmodem _q); + +int qpacketmodem_configure(qpacketmodem _q, + unsigned int _payload_len, + crc_scheme _check, + fec_scheme _fec0, + fec_scheme _fec1, + int _ms); + +// get length of encoded frame in symbols +unsigned int qpacketmodem_get_frame_len(qpacketmodem _q); + +// get unencoded/decoded payload length (bytes) +unsigned int qpacketmodem_get_payload_len(qpacketmodem _q); + +// regular access methods +unsigned int qpacketmodem_get_crc (qpacketmodem _q); +unsigned int qpacketmodem_get_fec0 (qpacketmodem _q); +unsigned int qpacketmodem_get_fec1 (qpacketmodem _q); +unsigned int qpacketmodem_get_modscheme(qpacketmodem _q); + +float qpacketmodem_get_demodulator_phase_error(qpacketmodem _q); +float qpacketmodem_get_demodulator_evm(qpacketmodem _q); + +// encode packet into un-modulated frame symbol indices +// _q : qpacketmodem object +// _payload : unencoded payload bytes +// _syms : encoded but un-modulated payload symbol indices +int qpacketmodem_encode_syms(qpacketmodem _q, + const unsigned char * _payload, + unsigned char * _syms); + +// decode packet from demodulated frame symbol indices (hard-decision decoding) +// _q : qpacketmodem object +// _syms : received hard-decision symbol indices [size: frame_len x 1] +// _payload : recovered decoded payload bytes +int qpacketmodem_decode_syms(qpacketmodem _q, + unsigned char * _syms, + unsigned char * _payload); + +// decode packet from demodulated frame bits (soft-decision decoding) +// _q : qpacketmodem object +// _bits : received soft-decision bits, [size: bps*frame_len x 1] +// _payload : recovered decoded payload bytes +int qpacketmodem_decode_bits(qpacketmodem _q, + unsigned char * _bits, + unsigned char * _payload); + +// encode and modulate packet into modulated frame samples +// _q : qpacketmodem object +// _payload : unencoded payload bytes +// _frame : encoded/modulated payload symbols +int qpacketmodem_encode(qpacketmodem _q, + const unsigned char * _payload, + liquid_float_complex * _frame); + +// decode packet from modulated frame samples, returning flag if CRC passed +// NOTE: hard-decision decoding +// _q : qpacketmodem object +// _frame : encoded/modulated payload symbols +// _payload : recovered decoded payload bytes +int qpacketmodem_decode(qpacketmodem _q, + liquid_float_complex * _frame, + unsigned char * _payload); + +// decode packet from modulated frame samples, returning flag if CRC passed +// NOTE: soft-decision decoding +// _q : qpacketmodem object +// _frame : encoded/modulated payload symbols +// _payload : recovered decoded payload bytes +int qpacketmodem_decode_soft(qpacketmodem _q, + liquid_float_complex * _frame, + unsigned char * _payload); + +int qpacketmodem_decode_soft_sym(qpacketmodem _q, + liquid_float_complex _symbol); + +int 
qpacketmodem_decode_soft_payload(qpacketmodem _q, + unsigned char * _payload); + +// +// pilot generator/synchronizer for packet burst recovery +// + +// get number of pilots in frame +unsigned int qpilot_num_pilots(unsigned int _payload_len, + unsigned int _pilot_spacing); + +// get length of frame with a particular payload length and pilot spacing +unsigned int qpilot_frame_len(unsigned int _payload_len, + unsigned int _pilot_spacing); + +// +// pilot generator for packet burst recovery +// + +typedef struct qpilotgen_s * qpilotgen; + +// create packet encoder +qpilotgen qpilotgen_create(unsigned int _payload_len, + unsigned int _pilot_spacing); + +qpilotgen qpilotgen_recreate(qpilotgen _q, + unsigned int _payload_len, + unsigned int _pilot_spacing); + +int qpilotgen_destroy(qpilotgen _q); +int qpilotgen_reset( qpilotgen _q); +int qpilotgen_print( qpilotgen _q); + +unsigned int qpilotgen_get_frame_len(qpilotgen _q); + +// insert pilot symbols +int qpilotgen_execute(qpilotgen _q, + liquid_float_complex * _payload, + liquid_float_complex * _frame); + +// +// pilot synchronizer for packet burst recovery +// +typedef struct qpilotsync_s * qpilotsync; + +// create packet encoder +qpilotsync qpilotsync_create(unsigned int _payload_len, + unsigned int _pilot_spacing); + +qpilotsync qpilotsync_recreate(qpilotsync _q, + unsigned int _payload_len, + unsigned int _pilot_spacing); + +int qpilotsync_destroy(qpilotsync _q); +int qpilotsync_reset( qpilotsync _q); +int qpilotsync_print( qpilotsync _q); + +unsigned int qpilotsync_get_frame_len(qpilotsync _q); + +// recover frame symbols from received frame +int qpilotsync_execute(qpilotsync _q, + liquid_float_complex * _frame, + liquid_float_complex * _payload); + +// get estimates +float qpilotsync_get_dphi(qpilotsync _q); +float qpilotsync_get_phi (qpilotsync _q); +float qpilotsync_get_gain(qpilotsync _q); +float qpilotsync_get_evm (qpilotsync _q); + + +// +// Basic frame generator (64 bytes data payload) +// + +// frame length in samples +#define LIQUID_FRAME64_LEN (1440) + +typedef struct framegen64_s * framegen64; + +// create frame generator +framegen64 framegen64_create(); + +// destroy frame generator +int framegen64_destroy(framegen64 _q); + +// print frame generator internal properties +int framegen64_print(framegen64 _q); + +// generate frame +// _q : frame generator object +// _header : 8-byte header data, NULL for random +// _payload : 64-byte payload data, NULL for random +// _frame : output frame samples [size: LIQUID_FRAME64_LEN x 1] +int framegen64_execute(framegen64 _q, + unsigned char * _header, + unsigned char * _payload, + liquid_float_complex * _frame); + +typedef struct framesync64_s * framesync64; + +// create framesync64 object +// _callback : callback function +// _userdata : user data pointer passed to callback function +framesync64 framesync64_create(framesync_callback _callback, + void * _userdata); + +// destroy frame synchronizer +int framesync64_destroy(framesync64 _q); + +// print frame synchronizer internal properties +int framesync64_print(framesync64 _q); + +// reset frame synchronizer internal state +int framesync64_reset(framesync64 _q); + +// push samples through frame synchronizer +// _q : frame synchronizer object +// _x : input samples [size: _n x 1] +// _n : number of input samples +int framesync64_execute(framesync64 _q, + liquid_float_complex * _x, + unsigned int _n); + +// enable/disable debugging +int framesync64_debug_enable(framesync64 _q); +int framesync64_debug_disable(framesync64 _q); +int 
framesync64_debug_print(framesync64 _q, const char * _filename); + +// frame data statistics +int framesync64_reset_framedatastats(framesync64 _q); +framedatastats_s framesync64_get_framedatastats (framesync64 _q); + +#if 0 +// advanced modes +int framesync64_set_csma_callbacks(framesync64 _q, + framesync_csma_callback _csma_lock, + framesync_csma_callback _csma_unlock, + void * _csma_userdata); +#endif + +// +// Flexible frame : adjustable payload, mod scheme, etc., but bring +// your own error correction, redundancy check +// + +// frame generator +typedef struct { + unsigned int check; // data validity check + unsigned int fec0; // forward error-correction scheme (inner) + unsigned int fec1; // forward error-correction scheme (outer) + unsigned int mod_scheme; // modulation scheme +} flexframegenprops_s; + +int flexframegenprops_init_default(flexframegenprops_s * _fgprops); + +typedef struct flexframegen_s * flexframegen; + +// create flexframegen object +// _props : frame properties (modulation scheme, etc.) +flexframegen flexframegen_create(flexframegenprops_s * _props); + +// destroy flexframegen object +int flexframegen_destroy(flexframegen _q); + +// print flexframegen object internals +int flexframegen_print(flexframegen _q); + +// reset flexframegen object internals +int flexframegen_reset(flexframegen _q); + +// is frame assembled? +int flexframegen_is_assembled(flexframegen _q); + +// get frame properties +int flexframegen_getprops(flexframegen _q, flexframegenprops_s * _props); + +// set frame properties +int flexframegen_setprops(flexframegen _q, flexframegenprops_s * _props); + +// set length of user-defined portion of header +int flexframegen_set_header_len(flexframegen _q, unsigned int _len); + +// set properties for header section +int flexframegen_set_header_props(flexframegen _q, + flexframegenprops_s * _props); + +// get length of assembled frame (samples) +unsigned int flexframegen_getframelen(flexframegen _q); + +// assemble a frame from an array of data +// _q : frame generator object +// _header : frame header +// _payload : payload data [size: _payload_len x 1] +// _payload_len : payload data length +int flexframegen_assemble(flexframegen _q, + const unsigned char * _header, + const unsigned char * _payload, + unsigned int _payload_len); + +// write samples of assembled frame, two samples at a time, returning +// '1' when frame is complete, '0' otherwise. Zeros will be written +// to the buffer if the frame is not assembled +// _q : frame generator object +// _buffer : output buffer [size: _buffer_len x 1] +// _buffer_len : output buffer length +int flexframegen_write_samples(flexframegen _q, + liquid_float_complex * _buffer, + unsigned int _buffer_len); + +// frame synchronizer + +typedef struct flexframesync_s * flexframesync; + +// create flexframesync object +// _callback : callback function +// _userdata : user data pointer passed to callback function +flexframesync flexframesync_create(framesync_callback _callback, + void * _userdata); + +// destroy frame synchronizer +int flexframesync_destroy(flexframesync _q); + +// print frame synchronizer internal properties +int flexframesync_print(flexframesync _q); + +// reset frame synchronizer internal state +int flexframesync_reset(flexframesync _q); + +// has frame been detected? 
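/* A hedged loopback sketch of the 64-byte frame generator/synchronizer pair declared
 * above; in hsmodem the samples would pass through the modulator and the SSB channel
 * rather than directly from generator to synchronizer. */
#include <stdio.h>
#include "liquid.h"

static int frame64_callback(unsigned char * header,  int header_valid,
                            unsigned char * payload, unsigned int payload_len,
                            int payload_valid, framesyncstats_s stats, void * userdata)
{
    (void)header; (void)payload; (void)stats; (void)userdata;
    printf("frame64: header %s, payload %s (%u bytes)\n",
           header_valid  ? "valid" : "invalid",
           payload_valid ? "valid" : "invalid", payload_len);
    return 0;
}

void frame64_loopback_sketch(void)
{
    liquid_float_complex frame[LIQUID_FRAME64_LEN];

    framegen64  fg = framegen64_create();
    framesync64 fs = framesync64_create(frame64_callback, NULL);

    // NULL header/payload generate random data, per the framegen64_execute() description
    framegen64_execute(fg, NULL, NULL, frame);
    framesync64_execute(fs, frame, LIQUID_FRAME64_LEN);

    framegen64_destroy(fg);
    framesync64_destroy(fs);
}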
+int flexframesync_is_frame_open(flexframesync _q); + +// change length of user-defined region in header +int flexframesync_set_header_len(flexframesync _q, + unsigned int _len); + +// enable or disable soft decoding of header +int flexframesync_decode_header_soft(flexframesync _q, + int _soft); + +// enable or disable soft decoding of payload +int flexframesync_decode_payload_soft(flexframesync _q, + int _soft); + +// set properties for header section +int flexframesync_set_header_props(flexframesync _q, + flexframegenprops_s * _props); + +// push samples through frame synchronizer +// _q : frame synchronizer object +// _x : input samples [size: _n x 1] +// _n : number of input samples +int flexframesync_execute(flexframesync _q, + liquid_float_complex * _x, + unsigned int _n); + +// frame data statistics +int flexframesync_reset_framedatastats(flexframesync _q); +framedatastats_s flexframesync_get_framedatastats (flexframesync _q); + +// enable/disable debugging +int flexframesync_debug_enable(flexframesync _q); +int flexframesync_debug_disable(flexframesync _q); +int flexframesync_debug_print(flexframesync _q, + const char * _filename); + +// +// bpacket : binary packet suitable for data streaming +// + +// +// bpacket generator/encoder +// +typedef struct bpacketgen_s * bpacketgen; + +// create bpacketgen object +// _m : p/n sequence length (ignored) +// _dec_msg_len : decoded message length (original uncoded data) +// _crc : data validity check (e.g. cyclic redundancy check) +// _fec0 : inner forward error-correction code scheme +// _fec1 : outer forward error-correction code scheme +bpacketgen bpacketgen_create(unsigned int _m, + unsigned int _dec_msg_len, + int _crc, + int _fec0, + int _fec1); + +// re-create bpacketgen object from old object +// _q : old bpacketgen object +// _m : p/n sequence length (ignored) +// _dec_msg_len : decoded message length (original uncoded data) +// _crc : data validity check (e.g. 
cyclic redundancy check) +// _fec0 : inner forward error-correction code scheme +// _fec1 : outer forward error-correction code scheme +bpacketgen bpacketgen_recreate(bpacketgen _q, + unsigned int _m, + unsigned int _dec_msg_len, + int _crc, + int _fec0, + int _fec1); + +// destroy bpacketgen object, freeing all internally-allocated memory +void bpacketgen_destroy(bpacketgen _q); + +// print bpacketgen internals +void bpacketgen_print(bpacketgen _q); + +// return length of full packet +unsigned int bpacketgen_get_packet_len(bpacketgen _q); + +// encode packet +void bpacketgen_encode(bpacketgen _q, + unsigned char * _msg_dec, + unsigned char * _packet); + +// +// bpacket synchronizer/decoder +// +typedef struct bpacketsync_s * bpacketsync; +typedef int (*bpacketsync_callback)(unsigned char * _payload, + int _payload_valid, + unsigned int _payload_len, + framesyncstats_s _stats, + void * _userdata); +bpacketsync bpacketsync_create(unsigned int _m, + bpacketsync_callback _callback, + void * _userdata); +int bpacketsync_destroy(bpacketsync _q); +int bpacketsync_print(bpacketsync _q); +int bpacketsync_reset(bpacketsync _q); + +// run synchronizer on array of input bytes +// _q : bpacketsync object +// _bytes : input data array [size: _n x 1] +// _n : input array size +int bpacketsync_execute(bpacketsync _q, + unsigned char * _bytes, + unsigned int _n); + +// run synchronizer on input byte +// _q : bpacketsync object +// _byte : input byte +int bpacketsync_execute_byte(bpacketsync _q, + unsigned char _byte); + +// run synchronizer on input symbol +// _q : bpacketsync object +// _sym : input symbol with _bps significant bits +// _bps : number of bits in input symbol +int bpacketsync_execute_sym(bpacketsync _q, + unsigned char _sym, + unsigned int _bps); + +// execute one bit at a time +int bpacketsync_execute_bit(bpacketsync _q, + unsigned char _bit); + +// +// M-FSK frame generator +// + +typedef struct fskframegen_s * fskframegen; + +// create M-FSK frame generator +fskframegen fskframegen_create(); +int fskframegen_destroy (fskframegen _fg); +int fskframegen_print (fskframegen _fg); +int fskframegen_reset (fskframegen _fg); +int fskframegen_assemble(fskframegen _fg, + unsigned char * _header, + unsigned char * _payload, + unsigned int _payload_len, + crc_scheme _check, + fec_scheme _fec0, + fec_scheme _fec1); +unsigned int fskframegen_getframelen(fskframegen _q); +int fskframegen_write_samples(fskframegen _fg, + liquid_float_complex * _buf, + unsigned int _buf_len); + + +// +// M-FSK frame synchronizer +// + +typedef struct fskframesync_s * fskframesync; + +// create M-FSK frame synchronizer +// _callback : callback function +// _userdata : user data pointer passed to callback function +fskframesync fskframesync_create(framesync_callback _callback, + void * _userdata); +int fskframesync_destroy(fskframesync _q); +int fskframesync_print (fskframesync _q); +int fskframesync_reset (fskframesync _q); +int fskframesync_execute(fskframesync _q, + liquid_float_complex _x); +int fskframesync_execute_block(fskframesync _q, + liquid_float_complex * _x, + unsigned int _n); + +// debugging +int fskframesync_debug_enable (fskframesync _q); +int fskframesync_debug_disable(fskframesync _q); +int fskframesync_debug_export (fskframesync _q, const char * _filename); + + +// +// GMSK frame generator +// + +typedef struct gmskframegen_s * gmskframegen; + +// create GMSK frame generator +gmskframegen gmskframegen_create(); +int gmskframegen_destroy (gmskframegen _q); +int gmskframegen_is_assembled (gmskframegen 
_q); +int gmskframegen_print (gmskframegen _q); +int gmskframegen_set_header_len(gmskframegen _q, unsigned int _len); +int gmskframegen_reset (gmskframegen _q); +int gmskframegen_assemble (gmskframegen _q, + const unsigned char * _header, + const unsigned char * _payload, + unsigned int _payload_len, + crc_scheme _check, + fec_scheme _fec0, + fec_scheme _fec1); +unsigned int gmskframegen_getframelen(gmskframegen _q); +int gmskframegen_write_samples(gmskframegen _q, + liquid_float_complex * _y); + +// write samples of assembled frame +// _q : frame generator object +// _buf : output buffer [size: _buf_len x 1] +// _buf_len : output buffer length +int gmskframegen_write(gmskframegen _q, + liquid_float_complex * _buf, + unsigned int _buf_len); + + +// +// GMSK frame synchronizer +// + +typedef struct gmskframesync_s * gmskframesync; + +// create GMSK frame synchronizer +// _callback : callback function +// _userdata : user data pointer passed to callback function +gmskframesync gmskframesync_create(framesync_callback _callback, + void * _userdata); +int gmskframesync_destroy(gmskframesync _q); +int gmskframesync_print(gmskframesync _q); +int gmskframesync_set_header_len(gmskframesync _q, unsigned int _len); +int gmskframesync_reset(gmskframesync _q); +int gmskframesync_is_frame_open(gmskframesync _q); +int gmskframesync_execute(gmskframesync _q, + liquid_float_complex * _x, + unsigned int _n); + +// debugging +int gmskframesync_debug_enable(gmskframesync _q); +int gmskframesync_debug_disable(gmskframesync _q); +int gmskframesync_debug_print(gmskframesync _q, const char * _filename); + + +// +// DSSS frame generator +// + +typedef struct { + unsigned int check; + unsigned int fec0; + unsigned int fec1; +} dsssframegenprops_s; + +typedef struct dsssframegen_s * dsssframegen; + +dsssframegen dsssframegen_create(dsssframegenprops_s * _props); +int dsssframegen_destroy(dsssframegen _q); +int dsssframegen_reset(dsssframegen _q); +int dsssframegen_is_assembled(dsssframegen _q); +int dsssframegen_getprops(dsssframegen _q, dsssframegenprops_s * _props); +int dsssframegen_setprops(dsssframegen _q, dsssframegenprops_s * _props); +int dsssframegen_set_header_len(dsssframegen _q, unsigned int _len); +int dsssframegen_set_header_props(dsssframegen _q, + dsssframegenprops_s * _props); +unsigned int dsssframegen_getframelen(dsssframegen _q); + +// assemble a frame from an array of data +// _q : frame generator object +// _header : frame header +// _payload : payload data [size: _payload_len x 1] +// _payload_len : payload data length +int dsssframegen_assemble(dsssframegen _q, + const unsigned char * _header, + const unsigned char * _payload, + unsigned int _payload_len); + +int dsssframegen_write_samples(dsssframegen _q, + liquid_float_complex * _buffer, + unsigned int _buffer_len); + + +// +// DSSS frame synchronizer +// + +typedef struct dsssframesync_s * dsssframesync; + +dsssframesync dsssframesync_create(framesync_callback _callback, void * _userdata); +int dsssframesync_destroy (dsssframesync _q); +int dsssframesync_print (dsssframesync _q); +int dsssframesync_reset (dsssframesync _q); +int dsssframesync_is_frame_open (dsssframesync _q); +int dsssframesync_set_header_len (dsssframesync _q, unsigned int _len); +int dsssframesync_decode_header_soft (dsssframesync _q, int _soft); +int dsssframesync_decode_payload_soft (dsssframesync _q, int _soft); +int dsssframesync_set_header_props (dsssframesync _q, dsssframegenprops_s * _props); +int dsssframesync_execute (dsssframesync _q, liquid_float_complex * _x, 
unsigned int _n); +int dsssframesync_reset_framedatastats(dsssframesync _q); +int dsssframesync_debug_enable (dsssframesync _q); +int dsssframesync_debug_disable (dsssframesync _q); +int dsssframesync_debug_print (dsssframesync _q, const char * _filename); +framedatastats_s dsssframesync_get_framedatastats (dsssframesync _q); + +// +// OFDM flexframe generator +// + +// ofdm frame generator properties +typedef struct { + unsigned int check; // data validity check + unsigned int fec0; // forward error-correction scheme (inner) + unsigned int fec1; // forward error-correction scheme (outer) + unsigned int mod_scheme; // modulation scheme + //unsigned int block_size; // framing block size +} ofdmflexframegenprops_s; +int ofdmflexframegenprops_init_default(ofdmflexframegenprops_s * _props); + +typedef struct ofdmflexframegen_s * ofdmflexframegen; + +// create OFDM flexible framing generator object +// _M : number of subcarriers, >10 typical +// _cp_len : cyclic prefix length +// _taper_len : taper length (OFDM symbol overlap) +// _p : subcarrier allocation (null, pilot, data), [size: _M x 1] +// _fgprops : frame properties (modulation scheme, etc.) +ofdmflexframegen ofdmflexframegen_create(unsigned int _M, + unsigned int _cp_len, + unsigned int _taper_len, + unsigned char * _p, + ofdmflexframegenprops_s * _fgprops); + +// destroy ofdmflexframegen object +int ofdmflexframegen_destroy(ofdmflexframegen _q); + +// print parameters, properties, etc. +int ofdmflexframegen_print(ofdmflexframegen _q); + +// reset ofdmflexframegen object internals +int ofdmflexframegen_reset(ofdmflexframegen _q); + +// is frame assembled? +int ofdmflexframegen_is_assembled(ofdmflexframegen _q); + +// get properties +int ofdmflexframegen_getprops(ofdmflexframegen _q, + ofdmflexframegenprops_s * _props); + +// set properties +int ofdmflexframegen_setprops(ofdmflexframegen _q, + ofdmflexframegenprops_s * _props); + +// set user-defined header length +int ofdmflexframegen_set_header_len(ofdmflexframegen _q, + unsigned int _len); + +int ofdmflexframegen_set_header_props(ofdmflexframegen _q, + ofdmflexframegenprops_s * _props); + +// get length of frame (symbols) +// _q : OFDM frame generator object +unsigned int ofdmflexframegen_getframelen(ofdmflexframegen _q); + +// assemble a frame from an array of data (NULL pointers will use random data) +// _q : OFDM frame generator object +// _header : frame header [8 bytes] +// _payload : payload data [size: _payload_len x 1] +// _payload_len : payload data length +int ofdmflexframegen_assemble(ofdmflexframegen _q, + const unsigned char * _header, + const unsigned char * _payload, + unsigned int _payload_len); + +// write samples of assembled frame +// _q : OFDM frame generator object +// _buf : output buffer [size: _buf_len x 1] +// _buf_len : output buffer length +int ofdmflexframegen_write(ofdmflexframegen _q, + liquid_float_complex * _buf, + unsigned int _buf_len); + +// +// OFDM flex frame synchronizer +// + +typedef struct ofdmflexframesync_s * ofdmflexframesync; + +// create OFDM flexible framing synchronizer object +// _M : number of subcarriers +// _cp_len : cyclic prefix length +// _taper_len : taper length (OFDM symbol overlap) +// _p : subcarrier allocation (null, pilot, data), [size: _M x 1] +// _callback : user-defined callback function +// _userdata : user-defined data pointer +ofdmflexframesync ofdmflexframesync_create(unsigned int _M, + unsigned int _cp_len, + unsigned int _taper_len, + unsigned char * _p, + framesync_callback _callback, + void * _userdata); + +int 
ofdmflexframesync_destroy(ofdmflexframesync _q); +int ofdmflexframesync_print(ofdmflexframesync _q); +// set user-defined header length +int ofdmflexframesync_set_header_len(ofdmflexframesync _q, + unsigned int _len); + +int ofdmflexframesync_decode_header_soft(ofdmflexframesync _q, + int _soft); + +int ofdmflexframesync_decode_payload_soft(ofdmflexframesync _q, + int _soft); + +int ofdmflexframesync_set_header_props(ofdmflexframesync _q, + ofdmflexframegenprops_s * _props); + +int ofdmflexframesync_reset(ofdmflexframesync _q); +int ofdmflexframesync_is_frame_open(ofdmflexframesync _q); +int ofdmflexframesync_execute(ofdmflexframesync _q, + liquid_float_complex * _x, + unsigned int _n); + +// query the received signal strength indication +float ofdmflexframesync_get_rssi(ofdmflexframesync _q); + +// query the received carrier offset estimate +float ofdmflexframesync_get_cfo(ofdmflexframesync _q); + +// frame data statistics +int ofdmflexframesync_reset_framedatastats(ofdmflexframesync _q); +framedatastats_s ofdmflexframesync_get_framedatastats (ofdmflexframesync _q); + +// set the received carrier offset estimate +int ofdmflexframesync_set_cfo(ofdmflexframesync _q, float _cfo); + +// enable/disable debugging +int ofdmflexframesync_debug_enable(ofdmflexframesync _q); +int ofdmflexframesync_debug_disable(ofdmflexframesync _q); +int ofdmflexframesync_debug_print(ofdmflexframesync _q, + const char * _filename); + + + +// +// Binary P/N synchronizer +// +#define LIQUID_BSYNC_MANGLE_RRRF(name) LIQUID_CONCAT(bsync_rrrf,name) +#define LIQUID_BSYNC_MANGLE_CRCF(name) LIQUID_CONCAT(bsync_crcf,name) +#define LIQUID_BSYNC_MANGLE_CCCF(name) LIQUID_CONCAT(bsync_cccf,name) + +// Macro: +// BSYNC : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_BSYNC_DEFINE_API(BSYNC,TO,TC,TI) \ + \ +/* Binary P/N synchronizer */ \ +typedef struct BSYNC(_s) * BSYNC(); \ + \ +/* Create bsync object */ \ +/* _n : sequence length */ \ +/* _v : correlation sequence [size: _n x 1] */ \ +BSYNC() BSYNC(_create)(unsigned int _n, \ + TC * _v); \ + \ +/* Create binary synchronizer from m-sequence */ \ +/* _g : m-sequence generator polynomial */ \ +/* _k : samples/symbol (over-sampling factor) */ \ +BSYNC() BSYNC(_create_msequence)(unsigned int _g, \ + unsigned int _k); \ + \ +/* Destroy binary synchronizer object, freeing all internal memory */ \ +/* _q : bsync object */ \ +void BSYNC(_destroy)(BSYNC() _q); \ + \ +/* Print object internals to stdout */ \ +/* _q : bsync object */ \ +void BSYNC(_print)(BSYNC() _q); \ + \ +/* Correlate input signal against internal sequence */ \ +/* _q : bsync object */ \ +/* _x : input sample */ \ +/* _y : pointer to output sample */ \ +void BSYNC(_correlate)(BSYNC() _q, \ + TI _x, \ + TO * _y); \ + +LIQUID_BSYNC_DEFINE_API(LIQUID_BSYNC_MANGLE_RRRF, + float, + float, + float) + +LIQUID_BSYNC_DEFINE_API(LIQUID_BSYNC_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_BSYNC_DEFINE_API(LIQUID_BSYNC_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// +// Pre-demodulation synchronizers (binary and otherwise) +// +#define LIQUID_PRESYNC_MANGLE_CCCF(name) LIQUID_CONCAT( presync_cccf,name) +#define LIQUID_BPRESYNC_MANGLE_CCCF(name) LIQUID_CONCAT(bpresync_cccf,name) + +// Macro: +// PRESYNC : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_PRESYNC_DEFINE_API(PRESYNC,TO,TC,TI) \ + \ +/* Pre-demodulation 
signal synchronizer */ \ +typedef struct PRESYNC(_s) * PRESYNC(); \ + \ +/* Create pre-demod synchronizer from external sequence */ \ +/* _v : baseband sequence, [size: _n x 1] */ \ +/* _n : baseband sequence length, _n > 0 */ \ +/* _dphi_max : maximum absolute frequency deviation for detection */ \ +/* _m : number of correlators, _m > 0 */ \ +PRESYNC() PRESYNC(_create)(TC * _v, \ + unsigned int _n, \ + float _dphi_max, \ + unsigned int _m); \ + \ +/* Destroy pre-demod synchronizer, freeing all internal memory */ \ +int PRESYNC(_destroy)(PRESYNC() _q); \ + \ +/* Print pre-demod synchronizer internal state */ \ +int PRESYNC(_print)(PRESYNC() _q); \ + \ +/* Reset pre-demod synchronizer internal state */ \ +int PRESYNC(_reset)(PRESYNC() _q); \ + \ +/* Push input sample into pre-demod synchronizer */ \ +/* _q : pre-demod synchronizer object */ \ +/* _x : input sample */ \ +int PRESYNC(_push)(PRESYNC() _q, \ + TI _x); \ + \ +/* Correlate original sequence with internal input buffer */ \ +/* _q : pre-demod synchronizer object */ \ +/* _rxy : output cross correlation */ \ +/* _dphi_hat : output frequency offset estimate */ \ +int PRESYNC(_execute)(PRESYNC() _q, \ + TO * _rxy, \ + float * _dphi_hat); \ + +// non-binary pre-demodulation synchronizer +LIQUID_PRESYNC_DEFINE_API(LIQUID_PRESYNC_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + +// binary pre-demodulation synchronizer +LIQUID_PRESYNC_DEFINE_API(LIQUID_BPRESYNC_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + +// +// Frame detector +// + +typedef struct qdetector_cccf_s * qdetector_cccf; + +// create detector with generic sequence +// _s : sample sequence +// _s_len : length of sample sequence +qdetector_cccf qdetector_cccf_create(liquid_float_complex * _s, + unsigned int _s_len); + +// create detector from sequence of symbols using internal linear interpolator +// _sequence : symbol sequence +// _sequence_len : length of symbol sequence +// _ftype : filter prototype (e.g. LIQUID_FIRFILT_RRC) +// _k : samples/symbol +// _m : filter delay +// _beta : excess bandwidth factor +qdetector_cccf qdetector_cccf_create_linear(liquid_float_complex * _sequence, + unsigned int _sequence_len, + int _ftype, + unsigned int _k, + unsigned int _m, + float _beta); + +// create detector from sequence of GMSK symbols +// _sequence : bit sequence +// _sequence_len : length of bit sequence +// _k : samples/symbol +// _m : filter delay +// _beta : excess bandwidth factor +qdetector_cccf qdetector_cccf_create_gmsk(unsigned char * _sequence, + unsigned int _sequence_len, + unsigned int _k, + unsigned int _m, + float _beta); + +// create detector from sequence of CP-FSK symbols (assuming one bit/symbol) +// _sequence : bit sequence +// _sequence_len : length of bit sequence +// _bps : bits per symbol, 0 < _bps <= 8 +// _h : modulation index, _h > 0 +// _k : samples/symbol +// _m : filter delay +// _beta : filter bandwidth parameter, _beta > 0 +// _type : filter type (e.g. 
LIQUID_CPFSK_SQUARE) +qdetector_cccf qdetector_cccf_create_cpfsk(unsigned char * _sequence, + unsigned int _sequence_len, + unsigned int _bps, + float _h, + unsigned int _k, + unsigned int _m, + float _beta, + int _type); + +int qdetector_cccf_destroy(qdetector_cccf _q); +int qdetector_cccf_print (qdetector_cccf _q); +int qdetector_cccf_reset (qdetector_cccf _q); + +// run detector, looking for sequence; return pointer to aligned, buffered samples +void * qdetector_cccf_execute(qdetector_cccf _q, + liquid_float_complex _x); + +// set detection threshold (should be between 0 and 1, good starting point is 0.5) +int qdetector_cccf_set_threshold(qdetector_cccf _q, + float _threshold); + +// set carrier offset search range +int qdetector_cccf_set_range(qdetector_cccf _q, + float _dphi_max); + +// access methods +unsigned int qdetector_cccf_get_seq_len (qdetector_cccf _q); // sequence length +const void * qdetector_cccf_get_sequence(qdetector_cccf _q); // pointer to sequence +unsigned int qdetector_cccf_get_buf_len (qdetector_cccf _q); // buffer length +float qdetector_cccf_get_rxy (qdetector_cccf _q); // correlator output +float qdetector_cccf_get_tau (qdetector_cccf _q); // fractional timing offset estimate +float qdetector_cccf_get_gamma (qdetector_cccf _q); // channel gain +float qdetector_cccf_get_dphi (qdetector_cccf _q); // carrier frequency offset estimate +float qdetector_cccf_get_phi (qdetector_cccf _q); // carrier phase offset estimate + +// +// Pre-demodulation detector +// + +typedef struct detector_cccf_s * detector_cccf; + +// create pre-demod detector +// _s : sequence +// _n : sequence length +// _threshold : detection threshold (default: 0.7) +// _dphi_max : maximum carrier offset +detector_cccf detector_cccf_create(liquid_float_complex * _s, + unsigned int _n, + float _threshold, + float _dphi_max); + +// destroy pre-demo detector object +void detector_cccf_destroy(detector_cccf _q); + +// print pre-demod detector internal state +void detector_cccf_print(detector_cccf _q); + +// reset pre-demod detector internal state +void detector_cccf_reset(detector_cccf _q); + +// Run sample through pre-demod detector's correlator. +// Returns '1' if signal was detected, '0' otherwise +// _q : pre-demod detector +// _x : input sample +// _tau_hat : fractional sample offset estimate (set when detected) +// _dphi_hat : carrier frequency offset estimate (set when detected) +// _gamma_hat : channel gain estimate (set when detected) +int detector_cccf_correlate(detector_cccf _q, + liquid_float_complex _x, + float * _tau_hat, + float * _dphi_hat, + float * _gamma_hat); + + +// +// symbol streaming for testing (no meaningful data, just symbols) +// +#define LIQUID_SYMSTREAM_MANGLE_CFLOAT(name) LIQUID_CONCAT(symstreamcf,name) + +#define LIQUID_SYMSTREAM_DEFINE_API(SYMSTREAM,TO) \ + \ +/* Symbol streaming generator object */ \ +typedef struct SYMSTREAM(_s) * SYMSTREAM(); \ + \ +/* Create symstream object with default parameters. */ \ +/* This is equivalent to invoking the create_linear() method */ \ +/* with _ftype=LIQUID_FIRFILT_ARKAISER, _k=2, _m=7, _beta=0.3, and */ \ +/* with _ms=LIQUID_MODEM_QPSK */ \ +SYMSTREAM() SYMSTREAM(_create)(void); \ + \ +/* Create symstream object with linear modulation */ \ +/* _ftype : filter type (e.g. LIQUID_FIRFILT_RRC) */ \ +/* _k : samples per symbol, _k >= 2 */ \ +/* _m : filter delay (symbols), _m > 0 */ \ +/* _beta : filter excess bandwidth, 0 < _beta <= 1 */ \ +/* _ms : modulation scheme, e.g. 
LIQUID_MODEM_QPSK */ \ +SYMSTREAM() SYMSTREAM(_create_linear)(int _ftype, \ + unsigned int _k, \ + unsigned int _m, \ + float _beta, \ + int _ms); \ + \ +/* Destroy symstream object, freeing all internal memory */ \ +int SYMSTREAM(_destroy)(SYMSTREAM() _q); \ + \ +/* Print symstream object's parameters */ \ +int SYMSTREAM(_print)(SYMSTREAM() _q); \ + \ +/* Reset symstream internal state */ \ +int SYMSTREAM(_reset)(SYMSTREAM() _q); \ + \ +/* Set internal linear modulation scheme, leaving the filter parameters */ \ +/* (interpolator) unmodified */ \ +int SYMSTREAM(_set_scheme)(SYMSTREAM() _q, \ + int _ms); \ + \ +/* Get internal linear modulation scheme */ \ +int SYMSTREAM(_get_scheme)(SYMSTREAM() _q); \ + \ +/* Set internal linear gain (before interpolation) */ \ +int SYMSTREAM(_set_gain)(SYMSTREAM() _q, \ + float _gain); \ + \ +/* Get internal linear gain (before interpolation) */ \ +float SYMSTREAM(_get_gain)(SYMSTREAM() _q); \ + \ +/* Write block of samples to output buffer */ \ +/* _q : synchronizer object */ \ +/* _buf : output buffer [size: _buf_len x 1] */ \ +/* _buf_len: output buffer size */ \ +int SYMSTREAM(_write_samples)(SYMSTREAM() _q, \ + TO * _buf, \ + unsigned int _buf_len); \ + +LIQUID_SYMSTREAM_DEFINE_API(LIQUID_SYMSTREAM_MANGLE_CFLOAT, liquid_float_complex) + + + +// +// multi-signal source for testing (no meaningful data, just signals) +// + +#define LIQUID_MSOURCE_MANGLE_CFLOAT(name) LIQUID_CONCAT(msourcecf,name) + +#define LIQUID_MSOURCE_DEFINE_API(MSOURCE,TO) \ + \ +/* Multi-signal source generator object */ \ +typedef struct MSOURCE(_s) * MSOURCE(); \ + \ +/* Create msource object by specifying channelizer parameters */ \ +/* _M : number of channels in analysis channelizer object */ \ +/* _m : prototype channelizer filter semi-length */ \ +/* _As : prototype channelizer filter stop-band suppression (dB) */ \ +MSOURCE() MSOURCE(_create)(unsigned int _M, \ + unsigned int _m, \ + float _As); \ + \ +/* Create default msource object with default parameters: */ \ +/* M = 1200, m = 4, As = 60 */ \ +MSOURCE() MSOURCE(_create_default)(void); \ + \ +/* Destroy msource object */ \ +int MSOURCE(_destroy)(MSOURCE() _q); \ + \ +/* Print msource object */ \ +int MSOURCE(_print)(MSOURCE() _q); \ + \ +/* Reset msource object */ \ +int MSOURCE(_reset)(MSOURCE() _q); \ + \ +/* user-defined callback for generating samples */ \ +typedef int (*MSOURCE(_callback))(void * _userdata, \ + TO * _v, \ + unsigned int _n); \ + \ +/* Add user-defined signal generator */ \ +int MSOURCE(_add_user)(MSOURCE() _q, \ + float _fc, \ + float _bw, \ + float _gain, \ + void * _userdata, \ + MSOURCE(_callback) _callback); \ + \ +/* Add tone to signal generator, returning id of signal */ \ +int MSOURCE(_add_tone)(MSOURCE() _q, \ + float _fc, \ + float _bw, \ + float _gain); \ + \ +/* Add chirp to signal generator, returning id of signal */ \ +/* _q : multi-signal source object */ \ +/* _duration : duration of chirp [samples] */ \ +/* _negate : negate frequency direction */ \ +/* _single : run single chirp? or repeatedly */ \ +int MSOURCE(_add_chirp)(MSOURCE() _q, \ + float _fc, \ + float _bw, \ + float _gain, \ + float _duration, \ + int _negate, \ + int _repeat); \ + \ +/* Add noise source to signal generator, returning id of signal */ \ +/* _q : multi-signal source object */ \ +/* _fc : ... */ \ +/* _bw : ... */ \ +/* _nstd : ... 
*/ \ +int MSOURCE(_add_noise)(MSOURCE() _q, \ + float _fc, \ + float _bw, \ + float _gain); \ + \ +/* Add modem signal source, returning id of signal */ \ +/* _q : multi-signal source object */ \ +/* _ms : modulation scheme, e.g. LIQUID_MODEM_QPSK */ \ +/* _m : filter delay (symbols), _m > 0 */ \ +/* _beta : filter excess bandwidth, 0 < _beta <= 1 */ \ +int MSOURCE(_add_modem)(MSOURCE() _q, \ + float _fc, \ + float _bw, \ + float _gain, \ + int _ms, \ + unsigned int _m, \ + float _beta); \ + \ +/* Add frequency-shift keying modem signal source, returning id of */ \ +/* signal */ \ +/* _q : multi-signal source object */ \ +/* _m : bits per symbol, _bps > 0 */ \ +/* _k : samples/symbol, _k >= 2^_m */ \ +int MSOURCE(_add_fsk)(MSOURCE() _q, \ + float _fc, \ + float _bw, \ + float _gain, \ + unsigned int _m, \ + unsigned int _k); \ + \ +/* Add GMSK modem signal source, returning id of signal */ \ +/* _q : multi-signal source object */ \ +/* _m : filter delay (symbols), _m > 0 */ \ +/* _bt : filter bandwidth-time factor, 0 < _bt <= 1 */ \ +int MSOURCE(_add_gmsk)(MSOURCE() _q, \ + float _fc, \ + float _bw, \ + float _gain, \ + unsigned int _m, \ + float _bt); \ + \ +/* Remove signal with a particular id, returning 0 upon success */ \ +/* _q : multi-signal source object */ \ +/* _id : signal source id */ \ +int MSOURCE(_remove)(MSOURCE() _q, \ + int _id); \ + \ +/* Enable signal source with a particular id */ \ +int MSOURCE(_enable)(MSOURCE() _q, \ + int _id); \ + \ +/* Disable signal source with a particular id */ \ +int MSOURCE(_disable)(MSOURCE() _q, \ + int _id); \ + \ +/* Set gain in decibels on signal */ \ +/* _q : msource object */ \ +/* _id : source id */ \ +/* _gain : signal gain [dB] */ \ +int MSOURCE(_set_gain)(MSOURCE() _q, \ + int _id, \ + float _gain); \ + \ +/* Get gain in decibels on signal */ \ +/* _q : msource object */ \ +/* _id : source id */ \ +/* _gain : signal gain output [dB] */ \ +int MSOURCE(_get_gain)(MSOURCE() _q, \ + int _id, \ + float * _gain); \ + \ +/* Get number of samples generated by the object so far */ \ +/* _q : msource object */ \ +/* _return : number of time-domain samples generated */ \ +unsigned long long int MSOURCE(_get_num_samples)(MSOURCE() _q); \ + \ +/* Set carrier offset to signal */ \ +/* _q : msource object */ \ +/* _id : source id */ \ +/* _fc : normalized carrier frequency offset, -0.5 <= _fc <= 0.5 */ \ +int MSOURCE(_set_frequency)(MSOURCE() _q, \ + int _id, \ + float _dphi); \ + \ +/* Get carrier offset to signal */ \ +/* _q : msource object */ \ +/* _id : source id */ \ +/* _fc : normalized carrier frequency offset */ \ +int MSOURCE(_get_frequency)(MSOURCE() _q, \ + int _id, \ + float * _dphi); \ + \ +/* Write block of samples to output buffer */ \ +/* _q : synchronizer object */ \ +/* _buf : output buffer, [size: _buf_len x 1] */ \ +/* _buf_len: output buffer size */ \ +int MSOURCE(_write_samples)(MSOURCE() _q, \ + TO * _buf, \ + unsigned int _buf_len); \ + +LIQUID_MSOURCE_DEFINE_API(LIQUID_MSOURCE_MANGLE_CFLOAT, liquid_float_complex) + + + + +// +// Symbol tracking: AGC > symsync > EQ > carrier recovery +// +#define LIQUID_SYMTRACK_MANGLE_RRRF(name) LIQUID_CONCAT(symtrack_rrrf,name) +#define LIQUID_SYMTRACK_MANGLE_CCCF(name) LIQUID_CONCAT(symtrack_cccf,name) + +// large macro +// SYMTRACK : name-mangling macro +// T : data type, primitive +// TO : data type, output +// TC : data type, coefficients +// TI : data type, input +#define LIQUID_SYMTRACK_DEFINE_API(SYMTRACK,T,TO,TC,TI) \ + \ +/* Symbol synchronizer and tracking object */ \ 
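/* A hedged sketch of the msourcecf multi-signal test source declared above, e.g. for
 * exercising a receiver with a synthetic spectrum; frequencies, bandwidths and gains
 * are illustrative. */
#include "liquid.h"

void msource_sketch(liquid_float_complex * buf, unsigned int buf_len)
{
    msourcecf gen = msourcecf_create_default();     // defaults: M=1200, m=4, As=60

    // each add_*() call returns an id usable with remove/enable/disable/set_gain
    msourcecf_add_noise(gen,  0.00f, 1.00f, -60.0f);                             // noise floor
    msourcecf_add_tone (gen, -0.30f, 0.01f, -20.0f);                             // CW carrier
    msourcecf_add_modem(gen,  0.20f, 0.10f,  -6.0f, LIQUID_MODEM_QPSK, 7, 0.3f); // QPSK signal

    msourcecf_write_samples(gen, buf, buf_len);     // fill buffer with the combined signal

    msourcecf_destroy(gen);
}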
+typedef struct SYMTRACK(_s) * SYMTRACK(); \ + \ +/* Create symtrack object, specifying parameters for operation */ \ +/* _ftype : filter type (e.g. LIQUID_FIRFILT_RRC) */ \ +/* _k : samples per symbol, _k >= 2 */ \ +/* _m : filter delay [symbols], _m > 0 */ \ +/* _beta : excess bandwidth factor, 0 <= _beta <= 1 */ \ +/* _ms : modulation scheme, _ms(LIQUID_MODEM_BPSK) */ \ +SYMTRACK() SYMTRACK(_create)(int _ftype, \ + unsigned int _k, \ + unsigned int _m, \ + float _beta, \ + int _ms); \ + \ +/* Create symtrack object using default parameters. */ \ +/* The default parameters are */ \ +/* ftype = LIQUID_FIRFILT_ARKAISER (filter type), */ \ +/* k = 2 (samples per symbol), */ \ +/* m = 7 (filter delay), */ \ +/* beta = 0.3 (excess bandwidth factor), and */ \ +/* ms = LIQUID_MODEM_QPSK (modulation scheme) */ \ +SYMTRACK() SYMTRACK(_create_default)(); \ + \ +/* Destroy symtrack object, freeing all internal memory */ \ +int SYMTRACK(_destroy)(SYMTRACK() _q); \ + \ +/* Print symtrack object's parameters */ \ +int SYMTRACK(_print)(SYMTRACK() _q); \ + \ +/* Reset symtrack internal state */ \ +int SYMTRACK(_reset)(SYMTRACK() _q); \ + \ +/* Set symtrack modulation scheme */ \ +/* _q : symtrack object */ \ +/* _ms : modulation scheme, _ms(LIQUID_MODEM_BPSK) */ \ +int SYMTRACK(_set_modscheme)(SYMTRACK() _q, \ + int _ms); \ + \ +/* Set symtrack internal bandwidth */ \ +/* _q : symtrack object */ \ +/* _bw : tracking bandwidth, _bw > 0 */ \ +int SYMTRACK(_set_bandwidth)(SYMTRACK() _q, \ + float _bw); \ + \ +/* Adjust internal NCO by requested phase */ \ +/* _q : symtrack object */ \ +/* _dphi : NCO phase adjustment [radians] */ \ +int SYMTRACK(_adjust_phase)(SYMTRACK() _q, \ + T _dphi); \ + \ +/* Set symtrack equalization strategy to constant modulus (default) */ \ +int SYMTRACK(_set_eq_cm)(SYMTRACK() _q); \ + \ +/* Set symtrack equalization strategy to decision directed */ \ +int SYMTRACK(_set_eq_dd)(SYMTRACK() _q); \ + \ +/* Disable symtrack equalization */ \ +int SYMTRACK(_set_eq_off)(SYMTRACK() _q); \ + \ +/* Execute synchronizer on single input sample */ \ +/* _q : synchronizer object */ \ +/* _x : input data sample */ \ +/* _y : output data array, [size: 2 x 1] */ \ +/* _ny : number of samples written to output buffer (0, 1, or 2) */ \ +int SYMTRACK(_execute)(SYMTRACK() _q, \ + TI _x, \ + TO * _y, \ + unsigned int * _ny); \ + \ +/* execute synchronizer on input data array */ \ +/* _q : synchronizer object */ \ +/* _x : input data array */ \ +/* _nx : number of input samples */ \ +/* _y : output data array, [size: 2 _nx x 1] */ \ +/* _ny : number of samples written to output buffer */ \ +int SYMTRACK(_execute_block)(SYMTRACK() _q, \ + TI * _x, \ + unsigned int _nx, \ + TO * _y, \ + unsigned int * _ny); \ + +LIQUID_SYMTRACK_DEFINE_API(LIQUID_SYMTRACK_MANGLE_RRRF, + float, + float, + float, + float) + +LIQUID_SYMTRACK_DEFINE_API(LIQUID_SYMTRACK_MANGLE_CCCF, + float, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + + +// +// MODULE : math +// + +// ln( Gamma(z) ) +float liquid_lngammaf(float _z); + +// Gamma(z) +float liquid_gammaf(float _z); + +// ln( gamma(z,alpha) ) : lower incomplete gamma function +float liquid_lnlowergammaf(float _z, float _alpha); + +// ln( Gamma(z,alpha) ) : upper incomplete gamma function +float liquid_lnuppergammaf(float _z, float _alpha); + +// gamma(z,alpha) : lower incomplete gamma function +float liquid_lowergammaf(float _z, float _alpha); + +// Gamma(z,alpha) : upper incomplete gamma function +float liquid_uppergammaf(float _z, float _alpha); + 
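/* A hedged sketch of the symtrack_cccf object from the symbol-tracking section above
 * (AGC > symsync > EQ > carrier recovery); the output buffer is sized 2*nx per the
 * execute_block() description and the tracking bandwidth is illustrative. */
#include <stdlib.h>
#include "liquid.h"

void symtrack_sketch(liquid_float_complex * rx, unsigned int nx)
{
    // defaults: ARKAISER matched filter, k=2 samples/symbol, m=7, beta=0.3, QPSK
    symtrack_cccf st = symtrack_cccf_create_default();
    symtrack_cccf_set_bandwidth(st, 0.05f);

    liquid_float_complex * syms = (liquid_float_complex*) malloc(2*nx*sizeof(liquid_float_complex));
    unsigned int num_syms = 0;
    symtrack_cccf_execute_block(st, rx, nx, syms, &num_syms);

    free(syms);
    symtrack_cccf_destroy(st);
}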
+// n! +float liquid_factorialf(unsigned int _n); + + + +// ln(I_v(z)) : log Modified Bessel function of the first kind +float liquid_lnbesselif(float _nu, float _z); + +// I_v(z) : Modified Bessel function of the first kind +float liquid_besselif(float _nu, float _z); + +// I_0(z) : Modified Bessel function of the first kind (order zero) +float liquid_besseli0f(float _z); + +// J_v(z) : Bessel function of the first kind +float liquid_besseljf(float _nu, float _z); + +// J_0(z) : Bessel function of the first kind (order zero) +float liquid_besselj0f(float _z); + + +// Q function +float liquid_Qf(float _z); + +// Marcum Q-function +float liquid_MarcumQf(int _M, + float _alpha, + float _beta); + +// Marcum Q-function (M=1) +float liquid_MarcumQ1f(float _alpha, + float _beta); + +// sin(pi x) / (pi x) +float sincf(float _x); + +// next power of 2 : y = ceil(log2(_x)) +unsigned int liquid_nextpow2(unsigned int _x); + +// (n choose k) = n! / ( k! (n-k)! ) +float liquid_nchoosek(unsigned int _n, unsigned int _k); + +// +// Windowing functions +// + +// number of window functions available, including "unknown" type +#define LIQUID_WINDOW_NUM_FUNCTIONS (10) + +// prototypes +typedef enum { + LIQUID_WINDOW_UNKNOWN=0, // unknown/unsupported scheme + + LIQUID_WINDOW_HAMMING, // Hamming + LIQUID_WINDOW_HANN, // Hann + LIQUID_WINDOW_BLACKMANHARRIS, // Blackman-harris (4-term) + LIQUID_WINDOW_BLACKMANHARRIS7, // Blackman-harris (7-term) + LIQUID_WINDOW_KAISER, // Kaiser (beta factor unspecified) + LIQUID_WINDOW_FLATTOP, // flat top (includes negative values) + LIQUID_WINDOW_TRIANGULAR, // triangular + LIQUID_WINDOW_RCOSTAPER, // raised-cosine taper (taper size unspecified) + LIQUID_WINDOW_KBD, // Kaiser-Bessel derived window (beta factor unspecified) +} liquid_window_type; + +// pretty names for window +extern const char * liquid_window_str[LIQUID_WINDOW_NUM_FUNCTIONS][2]; + +// Print compact list of existing and available windowing functions +void liquid_print_windows(); + +// returns window type based on input string +liquid_window_type liquid_getopt_str2window(const char * _str); + +// generic window function given type +// _type : window type, e.g. 
LIQUID_WINDOW_KAISER +// _i : window index, _i in [0,_wlen-1] +// _wlen : length of window +// _arg : window-specific argument, if required +float liquid_windowf(liquid_window_type _type, + unsigned int _i, + unsigned int _wlen, + float _arg); + +// Kaiser window +// _i : window index, _i in [0,_wlen-1] +// _wlen : full window length +// _beta : Kaiser-Bessel window shape parameter +float liquid_kaiser(unsigned int _i, + unsigned int _wlen, + float _beta); + +// Hamming window +// _i : window index, _i in [0,_wlen-1] +// _wlen : full window length +float liquid_hamming(unsigned int _i, + unsigned int _wlen); + +// Hann window +// _i : window index, _i in [0,_wlen-1] +// _wlen : full window length +float liquid_hann(unsigned int _i, + unsigned int _wlen); + +// Blackman-harris window +// _i : window index, _i in [0,_wlen-1] +// _wlen : full window length +float liquid_blackmanharris(unsigned int _i, + unsigned int _wlen); + +// 7th order Blackman-harris window +// _i : window index, _i in [0,_wlen-1] +// _wlen : full window length +float liquid_blackmanharris7(unsigned int _i, + unsigned int _wlen); + +// Flat-top window +// _i : window index, _i in [0,_wlen-1] +// _wlen : full window length +float liquid_flattop(unsigned int _i, + unsigned int _wlen); + +// Triangular window +// _i : window index, _i in [0,_wlen-1] +// _wlen : full window length +// _L : triangle length, _L in {_wlen-1, _wlen, _wlen+1} +float liquid_triangular(unsigned int _i, + unsigned int _wlen, + unsigned int _L); + +// raised-cosine tapering window +// _i : window index +// _wlen : full window length +// _t : taper length, _t in [0,_wlen/2] +float liquid_rcostaper_window(unsigned int _i, + unsigned int _wlen, + unsigned int _t); + +// Kaiser-Bessel derived window (single sample) +// _i : window index, _i in [0,_wlen-1] +// _wlen : length of filter (must be even) +// _beta : Kaiser window parameter (_beta > 0) +float liquid_kbd(unsigned int _i, + unsigned int _wlen, + float _beta); + +// Kaiser-Bessel derived window (full window) +// _wlen : full window length (must be even) +// _beta : Kaiser window parameter (_beta > 0) +// _w : window output buffer, [size: _wlen x 1] +int liquid_kbd_window(unsigned int _wlen, + float _beta, + float * _w); + + +// polynomials + + +#define LIQUID_POLY_MANGLE_DOUBLE(name) LIQUID_CONCAT(poly, name) +#define LIQUID_POLY_MANGLE_FLOAT(name) LIQUID_CONCAT(polyf, name) + +#define LIQUID_POLY_MANGLE_CDOUBLE(name) LIQUID_CONCAT(polyc, name) +#define LIQUID_POLY_MANGLE_CFLOAT(name) LIQUID_CONCAT(polycf, name) + +// large macro +// POLY : name-mangling macro +// T : data type +// TC : data type (complex) +#define LIQUID_POLY_DEFINE_API(POLY,T,TC) \ + \ +/* Evaluate polynomial _p at value _x */ \ +/* _p : polynomial coefficients [size _k x 1] */ \ +/* _k : polynomial coefficients length, order is _k - 1 */ \ +/* _x : input to evaluate polynomial */ \ +T POLY(_val)(T * _p, \ + unsigned int _k, \ + T _x); \ + \ +/* Perform least-squares polynomial fit on data set */ \ +/* _x : x-value sample set [size: _n x 1] */ \ +/* _y : y-value sample set [size: _n x 1] */ \ +/* _n : number of samples in _x and _y */ \ +/* _p : polynomial coefficients output [size _k x 1] */ \ +/* _k : polynomial coefficients length, order is _k - 1 */ \ +int POLY(_fit)(T * _x, \ + T * _y, \ + unsigned int _n, \ + T * _p, \ + unsigned int _k); \ + \ +/* Perform Lagrange polynomial exact fit on data set */ \ +/* _x : x-value sample set, size [_n x 1] */ \ +/* _y : y-value sample set, size [_n x 1] */ \ +/* _n : number of samples 
in _x and _y */ \ +/* _p : polynomial coefficients output [size _n x 1] */ \ +int POLY(_fit_lagrange)(T * _x, \ + T * _y, \ + unsigned int _n, \ + T * _p); \ + \ +/* Perform Lagrange polynomial interpolation on data set without */ \ +/* computing coefficients as an intermediate step. */ \ +/* _x : x-value sample set [size: _n x 1] */ \ +/* _y : y-value sample set [size: _n x 1] */ \ +/* _n : number of samples in _x and _y */ \ +/* _x0 : x-value to evaluate and compute interpolant */ \ +T POLY(_interp_lagrange)(T * _x, \ + T * _y, \ + unsigned int _n, \ + T _x0); \ + \ +/* Compute Lagrange polynomial fit in the barycentric form. */ \ +/* _x : x-value sample set, size [_n x 1] */ \ +/* _n : number of samples in _x */ \ +/* _w : barycentric weights normalized so _w[0]=1, size [_n x 1] */ \ +int POLY(_fit_lagrange_barycentric)(T * _x, \ + unsigned int _n, \ + T * _w); \ + \ +/* Perform Lagrange polynomial interpolation using the barycentric form */ \ +/* of the weights. */ \ +/* _x : x-value sample set [size: _n x 1] */ \ +/* _y : y-value sample set [size: _n x 1] */ \ +/* _w : barycentric weights [size: _n x 1] */ \ +/* _x0 : x-value to evaluate and compute interpolant */ \ +/* _n : number of samples in _x, _y, and _w */ \ +T POLY(_val_lagrange_barycentric)(T * _x, \ + T * _y, \ + T * _w, \ + T _x0, \ + unsigned int _n); \ + \ +/* Perform binomial expansion on the polynomial */ \ +/* \( P_n(x) = (1+x)^n \) */ \ +/* as */ \ +/* \( P_n(x) = p[0] + p[1]x + p[2]x^2 + ... + p[n]x^n \) */ \ +/* NOTE: _p has order n (coefficients has length n+1) */ \ +/* _n : polynomial order */ \ +/* _p : polynomial coefficients [size: _n+1 x 1] */ \ +int POLY(_expandbinomial)(unsigned int _n, \ + T * _p); \ + \ +/* Perform positive/negative binomial expansion on the polynomial */ \ +/* \( P_n(x) = (1+x)^m (1-x)^k \) */ \ +/* as */ \ +/* \( P_n(x) = p[0] + p[1]x + p[2]x^2 + ... + p[n]x^n \) */ \ +/* NOTE: _p has order n=m+k (array is length n+1) */ \ +/* _m : number of '1+x' terms */ \ +/* _k : number of '1-x' terms */ \ +/* _p : polynomial coefficients [size: _m+_k+1 x 1] */ \ +int POLY(_expandbinomial_pm)(unsigned int _m, \ + unsigned int _k, \ + T * _p); \ + \ +/* Perform root expansion on the polynomial */ \ +/* \( P_n(x) = (x-r[0]) (x-r[1]) ... (x-r[n-1]) \) */ \ +/* as */ \ +/* \( P_n(x) = p[0] + p[1]x + ... + p[n]x^n \) */ \ +/* where \( r[0],r[1],...,r[n-1]\) are the roots of \( P_n(x) \). */ \ +/* NOTE: _p has order _n (array is length _n+1) */ \ +/* _r : roots of polynomial [size: _n x 1] */ \ +/* _n : number of roots in polynomial */ \ +/* _p : polynomial coefficients [size: _n+1 x 1] */ \ +int POLY(_expandroots)(T * _r, \ + unsigned int _n, \ + T * _p); \ + \ +/* Perform root expansion on the polynomial */ \ +/* \( P_n(x) = (xb[0]-a[0]) (xb[1]-a[1])...(xb[n-1]-a[n-1]) \) */ \ +/* as */ \ +/* \( P_n(x) = p[0] + p[1]x + ... + p[n]x^n \) */ \ +/* NOTE: _p has order _n (array is length _n+1) */ \ +/* _a : subtractant of polynomial rotos [size: _n x 1] */ \ +/* _b : multiplicant of polynomial roots [size: _n x 1] */ \ +/* _n : number of roots in polynomial */ \ +/* _p : polynomial coefficients [size: _n+1 x 1] */ \ +int POLY(_expandroots2)(T * _a, \ + T * _b, \ + unsigned int _n, \ + T * _p); \ + \ +/* Find the complex roots of a polynomial. 
*/ \
+/* _poly : polynomial coefficients [size: _n x 1] */ \
+/* _n : polynomial length */ \
+/* _roots : resulting complex roots [size: _n-1 x 1] */ \
+int POLY(_findroots)(T * _poly, \
+ unsigned int _n, \
+ TC * _roots); \
+ \
+/* Find the complex roots of the polynomial using the Durand-Kerner */ \
+/* method */ \
+/* _p : polynomial coefficients [size: _k x 1] */ \
+/* _k : polynomial length */ \
+/* _roots : resulting complex roots [size: _k-1 x 1] */ \
+int POLY(_findroots_durandkerner)(T * _p, \
+ unsigned int _k, \
+ TC * _roots); \
+ \
+/* Find the complex roots of the polynomial using Bairstow's method. */ \
+/* _p : polynomial coefficients [size: _k x 1] */ \
+/* _k : polynomial length */ \
+/* _roots : resulting complex roots [size: _k-1 x 1] */ \
+int POLY(_findroots_bairstow)(T * _p, \
+ unsigned int _k, \
+ TC * _roots); \
+ \
+/* Expand the multiplication of two polynomials */ \
+/* \( ( a[0] + a[1]x + a[2]x^2 + ...) (b[0] + b[1]x + b[2]x^2 + ...) \) */ \
+/* as */ \
+/* \( c[0] + c[1]x + c[2]x^2 + ... + c[n]x^n \) */ \
+/* where order(c) = order(a) + order(b) */ \
+/* and therefore length(c) = length(a) + length(b) - 1 */ \
+/* _a : 1st polynomial coefficients (length is _order_a+1) */ \
+/* _order_a : 1st polynomial order */ \
+/* _b : 2nd polynomial coefficients (length is _order_b+1) */ \
+/* _order_b : 2nd polynomial order */ \
+/* _c : output polynomial [size: _order_a+_order_b+1 x 1] */ \
+int POLY(_mul)(T * _a, \
+ unsigned int _order_a, \
+ T * _b, \
+ unsigned int _order_b, \
+ T * _c); \
+
+LIQUID_POLY_DEFINE_API(LIQUID_POLY_MANGLE_DOUBLE,
+ double,
+ liquid_double_complex)
+
+LIQUID_POLY_DEFINE_API(LIQUID_POLY_MANGLE_FLOAT,
+ float,
+ liquid_float_complex)
+
+LIQUID_POLY_DEFINE_API(LIQUID_POLY_MANGLE_CDOUBLE,
+ liquid_double_complex,
+ liquid_double_complex)
+
+LIQUID_POLY_DEFINE_API(LIQUID_POLY_MANGLE_CFLOAT,
+ liquid_float_complex,
+ liquid_float_complex)
+
+#if 0
+// expands the polynomial: (1+x)^n
+void poly_binomial_expand(unsigned int _n, int * _c);
+
+// expands the polynomial: (1+x)^k * (1-x)^(n-k)
+void poly_binomial_expand_pm(unsigned int _n,
+ unsigned int _k,
+ int * _c);
+#endif
+
+//
+// modular arithmetic, etc.
+//
+
+// maximum number of factors
+#define LIQUID_MAX_FACTORS (40)
+
+// is number prime?
+int liquid_is_prime(unsigned int _n); + +// compute number's prime factors +// _n : number to factor +// _factors : pre-allocated array of factors [size: LIQUID_MAX_FACTORS x 1] +// _num_factors: number of factors found, sorted ascending +int liquid_factor(unsigned int _n, + unsigned int * _factors, + unsigned int * _num_factors); + +// compute number's unique prime factors +// _n : number to factor +// _factors : pre-allocated array of factors [size: LIQUID_MAX_FACTORS x 1] +// _num_factors: number of unique factors found, sorted ascending +int liquid_unique_factor(unsigned int _n, + unsigned int * _factors, + unsigned int * _num_factors); + +// compute greatest common divisor between to numbers P and Q +unsigned int liquid_gcd(unsigned int _P, + unsigned int _Q); + +// compute c = base^exp (mod n) +unsigned int liquid_modpow(unsigned int _base, + unsigned int _exp, + unsigned int _n); + +// find smallest primitive root of _n +unsigned int liquid_primitive_root(unsigned int _n); + +// find smallest primitive root of _n, assuming _n is prime +unsigned int liquid_primitive_root_prime(unsigned int _n); + +// Euler's totient function +unsigned int liquid_totient(unsigned int _n); + + +// +// MODULE : matrix +// + +#define LIQUID_MATRIX_MANGLE_DOUBLE(name) LIQUID_CONCAT(matrix, name) +#define LIQUID_MATRIX_MANGLE_FLOAT(name) LIQUID_CONCAT(matrixf, name) + +#define LIQUID_MATRIX_MANGLE_CDOUBLE(name) LIQUID_CONCAT(matrixc, name) +#define LIQUID_MATRIX_MANGLE_CFLOAT(name) LIQUID_CONCAT(matrixcf, name) + +// large macro +// MATRIX : name-mangling macro +// T : data type +#define LIQUID_MATRIX_DEFINE_API(MATRIX,T) \ + \ +/* Print array as matrix to stdout */ \ +/* _x : input matrix, [size: _r x _c] */ \ +/* _r : rows in matrix */ \ +/* _c : columns in matrix */ \ +int MATRIX(_print)(T * _x, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Perform point-wise addition between two matrices \(\vec{X}\) */ \ +/* and \(\vec{Y}\), saving the result in the output matrix \(\vec{Z}\). 
*/ \ +/* That is, \(\vec{Z}_{i,j}=\vec{X}_{i,j}+\vec{Y}_{i,j} \), */ \ +/* \( \forall_{i \in r} \) and \( \forall_{j \in c} \) */ \ +/* _x : input matrix, [size: _r x _c] */ \ +/* _y : input matrix, [size: _r x _c] */ \ +/* _z : output matrix, [size: _r x _c] */ \ +/* _r : number of rows in each matrix */ \ +/* _c : number of columns in each matrix */ \ +int MATRIX(_add)(T * _x, \ + T * _y, \ + T * _z, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Perform point-wise subtraction between two matrices \(\vec{X}\) */ \ +/* and \(\vec{Y}\), saving the result in the output matrix \(\vec{Z}\) */ \ +/* That is, \(\vec{Z}_{i,j}=\vec{X}_{i,j}-\vec{Y}_{i,j} \), */ \ +/* \( \forall_{i \in r} \) and \( \forall_{j \in c} \) */ \ +/* _x : input matrix, [size: _r x _c] */ \ +/* _y : input matrix, [size: _r x _c] */ \ +/* _z : output matrix, [size: _r x _c] */ \ +/* _r : number of rows in each matrix */ \ +/* _c : number of columns in each matrix */ \ +int MATRIX(_sub)(T * _x, \ + T * _y, \ + T * _z, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Perform point-wise multiplication between two matrices \(\vec{X}\) */ \ +/* and \(\vec{Y}\), saving the result in the output matrix \(\vec{Z}\) */ \ +/* That is, \(\vec{Z}_{i,j}=\vec{X}_{i,j} \vec{Y}_{i,j} \), */ \ +/* \( \forall_{i \in r} \) and \( \forall_{j \in c} \) */ \ +/* _x : input matrix, [size: _r x _c] */ \ +/* _y : input matrix, [size: _r x _c] */ \ +/* _z : output matrix, [size: _r x _c] */ \ +/* _r : number of rows in each matrix */ \ +/* _c : number of columns in each matrix */ \ +int MATRIX(_pmul)(T * _x, \ + T * _y, \ + T * _z, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Perform point-wise division between two matrices \(\vec{X}\) */ \ +/* and \(\vec{Y}\), saving the result in the output matrix \(\vec{Z}\) */ \ +/* That is, \(\vec{Z}_{i,j}=\vec{X}_{i,j}/\vec{Y}_{i,j} \), */ \ +/* \( \forall_{i \in r} \) and \( \forall_{j \in c} \) */ \ +/* _x : input matrix, [size: _r x _c] */ \ +/* _y : input matrix, [size: _r x _c] */ \ +/* _z : output matrix, [size: _r x _c] */ \ +/* _r : number of rows in each matrix */ \ +/* _c : number of columns in each matrix */ \ +int MATRIX(_pdiv)(T * _x, \ + T * _y, \ + T * _z, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Multiply two matrices \(\vec{X}\) and \(\vec{Y}\), storing the */ \ +/* result in \(\vec{Z}\). 
*/ \ +/* NOTE: _rz = _rx, _cz = _cy, and _cx = _ry */ \ +/* _x : input matrix, [size: _rx x _cx] */ \ +/* _rx : number of rows in _x */ \ +/* _cx : number of columns in _x */ \ +/* _y : input matrix, [size: _ry x _cy] */ \ +/* _ry : number of rows in _y */ \ +/* _cy : number of columns in _y */ \ +/* _z : output matrix, [size: _rz x _cz] */ \ +/* _rz : number of rows in _z */ \ +/* _cz : number of columns in _z */ \ +int MATRIX(_mul)(T * _x, unsigned int _rx, unsigned int _cx, \ + T * _y, unsigned int _ry, unsigned int _cy, \ + T * _z, unsigned int _rz, unsigned int _cz); \ + \ +/* Solve \(\vec{X} = \vec{Y} \vec{Z}\) for \(\vec{Z}\) for square */ \ +/* matrices of size \(n\) */ \ +/* _x : input matrix, [size: _n x _n] */ \ +/* _y : input matrix, [size: _n x _n] */ \ +/* _z : output matrix, [size: _n x _n] */ \ +/* _n : number of rows and columns in each matrix */ \ +int MATRIX(_div)(T * _x, \ + T * _y, \ + T * _z, \ + unsigned int _n); \ + \ +/* Compute the determinant of a square matrix \(\vec{X}\) */ \ +/* _x : input matrix, [size: _r x _c] */ \ +/* _r : rows */ \ +/* _c : columns */ \ +T MATRIX(_det)(T * _x, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Compute the in-place transpose of the matrix \(\vec{X}\) */ \ +/* _x : input matrix, [size: _r x _c] */ \ +/* _r : rows */ \ +/* _c : columns */ \ +int MATRIX(_trans)(T * _x, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Compute the in-place Hermitian transpose of the matrix \(\vec{X}\) */ \ +/* _x : input matrix, [size: _r x _c] */ \ +/* _r : rows */ \ +/* _c : columns */ \ +int MATRIX(_hermitian)(T * _x, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Compute \(\vec{X}\vec{X}^T\) on a \(m \times n\) matrix. */ \ +/* The result is a \(m \times m\) matrix. */ \ +/* _x : input matrix, [size: _m x _n] */ \ +/* _m : input rows */ \ +/* _n : input columns */ \ +/* _xxT : output matrix, [size: _m x _m] */ \ +int MATRIX(_mul_transpose)(T * _x, \ + unsigned int _m, \ + unsigned int _n, \ + T * _xxT); \ + \ +/* Compute \(\vec{X}^T\vec{X}\) on a \(m \times n\) matrix. */ \ +/* The result is a \(n \times n\) matrix. */ \ +/* _x : input matrix, [size: _m x _n] */ \ +/* _m : input rows */ \ +/* _n : input columns */ \ +/* _xTx : output matrix, [size: _n x _n] */ \ +int MATRIX(_transpose_mul)(T * _x, \ + unsigned int _m, \ + unsigned int _n, \ + T * _xTx); \ + \ +/* Compute \(\vec{X}\vec{X}^H\) on a \(m \times n\) matrix. */ \ +/* The result is a \(m \times m\) matrix. */ \ +/* _x : input matrix, [size: _m x _n] */ \ +/* _m : input rows */ \ +/* _n : input columns */ \ +/* _xxH : output matrix, [size: _m x _m] */ \ +int MATRIX(_mul_hermitian)(T * _x, \ + unsigned int _m, \ + unsigned int _n, \ + T * _xxH); \ + \ +/* Compute \(\vec{X}^H\vec{X}\) on a \(m \times n\) matrix. */ \ +/* The result is a \(n \times n\) matrix. 
*/ \ +/* _x : input matrix, [size: _m x _n] */ \ +/* _m : input rows */ \ +/* _n : input columns */ \ +/* _xHx : output matrix, [size: _n x _n] */ \ +int MATRIX(_hermitian_mul)(T * _x, \ + unsigned int _m, \ + unsigned int _n, \ + T * _xHx); \ + \ + \ +/* Augment two matrices \(\vec{X}\) and \(\vec{Y}\), storing the result */ \ +/* in \(\vec{Z}\) */ \ +/* NOTE: _rz = _rx = _ry, _rx = _ry, and _cz = _cx + _cy */ \ +/* _x : input matrix, [size: _rx x _cx] */ \ +/* _rx : number of rows in _x */ \ +/* _cx : number of columns in _x */ \ +/* _y : input matrix, [size: _ry x _cy] */ \ +/* _ry : number of rows in _y */ \ +/* _cy : number of columns in _y */ \ +/* _z : output matrix, [size: _rz x _cz] */ \ +/* _rz : number of rows in _z */ \ +/* _cz : number of columns in _z */ \ +int MATRIX(_aug)(T * _x, unsigned int _rx, unsigned int _cx, \ + T * _y, unsigned int _ry, unsigned int _cy, \ + T * _z, unsigned int _rz, unsigned int _cz); \ + \ +/* Compute the inverse of a square matrix \(\vec{X}\) */ \ +/* _x : input/output matrix, [size: _r x _c] */ \ +/* _r : rows */ \ +/* _c : columns */ \ +int MATRIX(_inv)(T * _x, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Generate the identity square matrix of size \(n\) */ \ +/* _x : output matrix, [size: _n x _n] */ \ +/* _n : dimensions of _x */ \ +int MATRIX(_eye)(T * _x, \ + unsigned int _n); \ + \ +/* Generate the all-ones matrix of size \(n\) */ \ +/* _x : output matrix, [size: _r x _c] */ \ +/* _r : rows */ \ +/* _c : columns */ \ +int MATRIX(_ones)(T * _x, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Generate the all-zeros matrix of size \(n\) */ \ +/* _x : output matrix, [size: _r x _c] */ \ +/* _r : rows */ \ +/* _c : columns */ \ +int MATRIX(_zeros)(T * _x, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Perform Gauss-Jordan elimination on matrix \(\vec{X}\) */ \ +/* _x : input/output matrix, [size: _r x _c] */ \ +/* _r : rows */ \ +/* _c : columns */ \ +int MATRIX(_gjelim)(T * _x, \ + unsigned int _r, \ + unsigned int _c); \ + \ +/* Pivot on element \(\vec{X}_{i,j}\) */ \ +/* _x : output matrix, [size: _r x _c] */ \ +/* _r : rows of _x */ \ +/* _c : columns of _x */ \ +/* _i : pivot row */ \ +/* _j : pivot column */ \ +int MATRIX(_pivot)(T * _x, \ + unsigned int _r, \ + unsigned int _c, \ + unsigned int _i, \ + unsigned int _j); \ + \ +/* Swap rows _r1 and _r2 of matrix \(\vec{X}\) */ \ +/* _x : input/output matrix, [size: _r x _c] */ \ +/* _r : rows of _x */ \ +/* _c : columns of _x */ \ +/* _r1 : first row to swap */ \ +/* _r2 : second row to swap */ \ +int MATRIX(_swaprows)(T * _x, \ + unsigned int _r, \ + unsigned int _c, \ + unsigned int _r1, \ + unsigned int _r2); \ + \ +/* Solve linear system of \(n\) equations: \(\vec{A}\vec{x} = \vec{b}\) */ \ +/* _A : system matrix, [size: _n x _n] */ \ +/* _n : system size */ \ +/* _b : equality vector, [size: _n x 1] */ \ +/* _x : solution vector, [size: _n x 1] */ \ +/* _opts : options (ignored for now) */ \ +int MATRIX(_linsolve)(T * _A, \ + unsigned int _n, \ + T * _b, \ + T * _x, \ + void * _opts); \ + \ +/* Solve linear system of equations using conjugate gradient method. 
*/ \
+/* _A : symmetric positive definite square matrix */ \
+/* _n : system dimension */ \
+/* _b : equality vector, [size: _n x 1] */ \
+/* _x : solution estimate, [size: _n x 1] */ \
+/* _opts : options (ignored for now) */ \
+int MATRIX(_cgsolve)(T * _A, \
+ unsigned int _n, \
+ T * _b, \
+ T * _x, \
+ void * _opts); \
+ \
+/* Perform L/U/P decomposition using Crout's method */ \
+/* _x : input/output matrix, [size: _rx x _cx] */ \
+/* _rx : rows of _x */ \
+/* _cx : columns of _x */ \
+/* _L : output lower-triangular matrix */ \
+/* _U : output upper-triangular matrix */ \
+/* _P : output permutation matrix */ \
+int MATRIX(_ludecomp_crout)(T * _x, \
+ unsigned int _rx, \
+ unsigned int _cx, \
+ T * _L, \
+ T * _U, \
+ T * _P); \
+ \
+/* Perform L/U/P decomposition, Doolittle's method */ \
+/* _x : input/output matrix, [size: _rx x _cx] */ \
+/* _rx : rows of _x */ \
+/* _cx : columns of _x */ \
+/* _L : output lower-triangular matrix */ \
+/* _U : output upper-triangular matrix */ \
+/* _P : output permutation matrix */ \
+int MATRIX(_ludecomp_doolittle)(T * _x, \
+ unsigned int _rx, \
+ unsigned int _cx, \
+ T * _L, \
+ T * _U, \
+ T * _P); \
+ \
+/* Perform orthonormalization using the Gram-Schmidt algorithm */ \
+/* _A : input matrix, [size: _r x _c] */ \
+/* _r : rows */ \
+/* _c : columns */ \
+/* _v : output matrix */ \
+int MATRIX(_gramschmidt)(T * _A, \
+ unsigned int _r, \
+ unsigned int _c, \
+ T * _v); \
+ \
+/* Perform Q/R decomposition using the Gram-Schmidt algorithm such that */ \
+/* \( \vec{A} = \vec{Q} \vec{R} \) */ \
+/* and \( \vec{Q}^T \vec{Q} = \vec{I}_n \) */ \
+/* and \(\vec{R}\) is an upper-triangular \(m \times m\) matrix */ \
+/* NOTE: all matrices are square */ \
+/* _A : input matrix, [size: _m x _m] */ \
+/* _m : rows */ \
+/* _n : columns (same as rows) */ \
+/* _Q : output matrix, [size: _m x _m] */ \
+/* _R : output matrix, [size: _m x _m] */ \
+int MATRIX(_qrdecomp_gramschmidt)(T * _A, \
+ unsigned int _m, \
+ unsigned int _n, \
+ T * _Q, \
+ T * _R); \
+ \
+/* Compute Cholesky decomposition of a symmetric/Hermitian */ \
+/* positive-definite matrix as \( \vec{A} = \vec{L}\vec{L}^T \) */ \
+/* _A : input square matrix, [size: _n x _n] */ \
+/* _n : input matrix dimension */ \
+/* _L : output lower-triangular matrix */ \
+int MATRIX(_chol)(T * _A, \
+ unsigned int _n, \
+ T * _L); \
+
+#define matrix_access(X,R,C,r,c) ((X)[(r)*(C)+(c)])
+
+#define matrixc_access(X,R,C,r,c) matrix_access(X,R,C,r,c)
+#define matrixf_access(X,R,C,r,c) matrix_access(X,R,C,r,c)
+#define matrixcf_access(X,R,C,r,c) matrix_access(X,R,C,r,c)
+
+LIQUID_MATRIX_DEFINE_API(LIQUID_MATRIX_MANGLE_FLOAT, float)
+LIQUID_MATRIX_DEFINE_API(LIQUID_MATRIX_MANGLE_DOUBLE, double)
+
+LIQUID_MATRIX_DEFINE_API(LIQUID_MATRIX_MANGLE_CFLOAT, liquid_float_complex)
+LIQUID_MATRIX_DEFINE_API(LIQUID_MATRIX_MANGLE_CDOUBLE, liquid_double_complex)
+
+
+#define LIQUID_SMATRIX_MANGLE_BOOL(name) LIQUID_CONCAT(smatrixb, name)
+#define LIQUID_SMATRIX_MANGLE_FLOAT(name) LIQUID_CONCAT(smatrixf, name)
+#define LIQUID_SMATRIX_MANGLE_INT(name) LIQUID_CONCAT(smatrixi, name)
+
+// sparse 'alist' matrix type (similar to MacKay, Davey, Lafferty convention)
+// large macro
+// SMATRIX : name-mangling macro
+// T : primitive data type
+#define LIQUID_SMATRIX_DEFINE_API(SMATRIX,T) \
+ \
+/* Sparse matrix object (similar to MacKay, Davey, Lafferty convention) */ \
+typedef struct SMATRIX(_s) * SMATRIX(); \
+ \
+/* Create _M x _N sparse matrix, initialized with zeros */ \
+SMATRIX() SMATRIX(_create)(unsigned int _M, \
+ unsigned int _N); \
+ \
+/* Create _M x _N sparse matrix, initialized on array */ \
+/* 
_x : input matrix, [size: _m x _n] */ \ +/* _m : number of rows in input matrix */ \ +/* _n : number of columns in input matrix */ \ +SMATRIX() SMATRIX(_create_array)(T * _x, \ + unsigned int _m, \ + unsigned int _n); \ + \ +/* Destroy object, freeing all internal memory */ \ +int SMATRIX(_destroy)(SMATRIX() _q); \ + \ +/* Print sparse matrix in compact form to stdout */ \ +int SMATRIX(_print)(SMATRIX() _q); \ + \ +/* Print sparse matrix in expanded form to stdout */ \ +int SMATRIX(_print_expanded)(SMATRIX() _q); \ + \ +/* Get size of sparse matrix (number of rows and columns) */ \ +/* _q : sparse matrix object */ \ +/* _m : number of rows in matrix */ \ +/* _n : number of columns in matrix */ \ +int SMATRIX(_size)(SMATRIX() _q, \ + unsigned int * _m, \ + unsigned int * _n); \ + \ +/* Zero all elements and retain allocated memory */ \ +int SMATRIX(_clear)(SMATRIX() _q); \ + \ +/* Zero all elements and clear memory */ \ +int SMATRIX(_reset)(SMATRIX() _q); \ + \ +/* Determine if value has been set (allocated memory) */ \ +/* _q : sparse matrix object */ \ +/* _m : row index of value to query */ \ +/* _n : column index of value to query */ \ +int SMATRIX(_isset)(SMATRIX() _q, \ + unsigned int _m, \ + unsigned int _n); \ + \ +/* Insert an element at index, allocating memory as necessary */ \ +/* _q : sparse matrix object */ \ +/* _m : row index of value to insert */ \ +/* _n : column index of value to insert */ \ +/* _v : value to insert */ \ +int SMATRIX(_insert)(SMATRIX() _q, \ + unsigned int _m, \ + unsigned int _n, \ + T _v); \ + \ +/* Delete an element at index, freeing memory */ \ +/* _q : sparse matrix object */ \ +/* _m : row index of value to delete */ \ +/* _n : column index of value to delete */ \ +int SMATRIX(_delete)(SMATRIX() _q, \ + unsigned int _m, \ + unsigned int _n); \ + \ +/* Set the value in matrix at specified row and column, allocating */ \ +/* memory if needed */ \ +/* _q : sparse matrix object */ \ +/* _m : row index of value to set */ \ +/* _n : column index of value to set */ \ +/* _v : value to set in matrix */ \ +int SMATRIX(_set)(SMATRIX() _q, \ + unsigned int _m, \ + unsigned int _n, \ + T _v); \ + \ +/* Get the value from matrix at specified row and column */ \ +/* _q : sparse matrix object */ \ +/* _m : row index of value to get */ \ +/* _n : column index of value to get */ \ +T SMATRIX(_get)(SMATRIX() _q, \ + unsigned int _m, \ + unsigned int _n); \ + \ +/* Initialize to identity matrix; set all diagonal elements to 1, all */ \ +/* others to 0. This is done with both square and non-square matrices. 
*/ \ +int SMATRIX(_eye)(SMATRIX() _q); \ + \ +/* Multiply two sparse matrices, \( \vec{Z} = \vec{X} \vec{Y} \) */ \ +/* _x : sparse matrix object (input) */ \ +/* _y : sparse matrix object (input) */ \ +/* _z : sparse matrix object (output) */ \ +int SMATRIX(_mul)(SMATRIX() _x, \ + SMATRIX() _y, \ + SMATRIX() _z); \ + \ +/* Multiply sparse matrix by vector */ \ +/* _q : sparse matrix */ \ +/* _x : input vector, [size: _n x 1] */ \ +/* _y : output vector, [size: _m x 1] */ \ +int SMATRIX(_vmul)(SMATRIX() _q, \ + T * _x, \ + T * _y); \ + +LIQUID_SMATRIX_DEFINE_API(LIQUID_SMATRIX_MANGLE_BOOL, unsigned char) +LIQUID_SMATRIX_DEFINE_API(LIQUID_SMATRIX_MANGLE_FLOAT, float) +LIQUID_SMATRIX_DEFINE_API(LIQUID_SMATRIX_MANGLE_INT, short int) + +// +// smatrix cross methods +// + +// multiply sparse binary matrix by floating-point matrix +// _q : sparse matrix [size: A->M x A->N] +// _x : input vector [size: mx x nx ] +// _y : output vector [size: my x ny ] +int smatrixb_mulf(smatrixb _A, + float * _x, + unsigned int _mx, + unsigned int _nx, + float * _y, + unsigned int _my, + unsigned int _ny); + +// multiply sparse binary matrix by floating-point vector +// _q : sparse matrix +// _x : input vector [size: _N x 1] +// _y : output vector [size: _M x 1] +int smatrixb_vmulf(smatrixb _q, + float * _x, + float * _y); + + +// +// MODULE : modem (modulator/demodulator) +// + +// Maximum number of allowed bits per symbol +#define MAX_MOD_BITS_PER_SYMBOL 8 + +// Modulation schemes available +#define LIQUID_MODEM_NUM_SCHEMES (52) + +typedef enum { + LIQUID_MODEM_UNKNOWN=0, // Unknown modulation scheme + + // Phase-shift keying (PSK) + LIQUID_MODEM_PSK2, LIQUID_MODEM_PSK4, + LIQUID_MODEM_PSK8, LIQUID_MODEM_PSK16, + LIQUID_MODEM_PSK32, LIQUID_MODEM_PSK64, + LIQUID_MODEM_PSK128, LIQUID_MODEM_PSK256, + + // Differential phase-shift keying (DPSK) + LIQUID_MODEM_DPSK2, LIQUID_MODEM_DPSK4, + LIQUID_MODEM_DPSK8, LIQUID_MODEM_DPSK16, + LIQUID_MODEM_DPSK32, LIQUID_MODEM_DPSK64, + LIQUID_MODEM_DPSK128, LIQUID_MODEM_DPSK256, + + // amplitude-shift keying + LIQUID_MODEM_ASK2, LIQUID_MODEM_ASK4, + LIQUID_MODEM_ASK8, LIQUID_MODEM_ASK16, + LIQUID_MODEM_ASK32, LIQUID_MODEM_ASK64, + LIQUID_MODEM_ASK128, LIQUID_MODEM_ASK256, + + // rectangular quadrature amplitude-shift keying (QAM) + LIQUID_MODEM_QAM4, + LIQUID_MODEM_QAM8, LIQUID_MODEM_QAM16, + LIQUID_MODEM_QAM32, LIQUID_MODEM_QAM64, + LIQUID_MODEM_QAM128, LIQUID_MODEM_QAM256, + + // amplitude phase-shift keying (APSK) + LIQUID_MODEM_APSK4, + LIQUID_MODEM_APSK8, LIQUID_MODEM_APSK16, + LIQUID_MODEM_APSK32, LIQUID_MODEM_APSK64, + LIQUID_MODEM_APSK128, LIQUID_MODEM_APSK256, + + // specific modem types + LIQUID_MODEM_BPSK, // Specific: binary PSK + LIQUID_MODEM_QPSK, // specific: quaternary PSK + LIQUID_MODEM_OOK, // Specific: on/off keying + LIQUID_MODEM_SQAM32, // 'square' 32-QAM + LIQUID_MODEM_SQAM128, // 'square' 128-QAM + LIQUID_MODEM_V29, // V.29 star constellation + LIQUID_MODEM_ARB16OPT, // optimal 16-QAM + LIQUID_MODEM_ARB32OPT, // optimal 32-QAM + LIQUID_MODEM_ARB64OPT, // optimal 64-QAM + LIQUID_MODEM_ARB128OPT, // optimal 128-QAM + LIQUID_MODEM_ARB256OPT, // optimal 256-QAM + LIQUID_MODEM_ARB64VT, // Virginia Tech logo + + // arbitrary modem type + LIQUID_MODEM_ARB // arbitrary QAM +} modulation_scheme; + +// structure for holding full modulation type descriptor +struct modulation_type_s { + const char * name; // short name (e.g. 'bpsk') + const char * fullname; // full name (e.g. 'binary phase-shift keying') + modulation_scheme scheme; // modulation scheme (e.g. 
LIQUID_MODEM_BPSK) + unsigned int bps; // modulation depth (e.g. 1) +}; + +// full modulation type descriptor +extern const struct modulation_type_s modulation_types[LIQUID_MODEM_NUM_SCHEMES]; + +// Print compact list of existing and available modulation schemes +int liquid_print_modulation_schemes(); + +// returns modulation_scheme based on input string +modulation_scheme liquid_getopt_str2mod(const char * _str); + +// query basic modulation types +int liquid_modem_is_psk(modulation_scheme _ms); +int liquid_modem_is_dpsk(modulation_scheme _ms); +int liquid_modem_is_ask(modulation_scheme _ms); +int liquid_modem_is_qam(modulation_scheme _ms); +int liquid_modem_is_apsk(modulation_scheme _ms); + +// useful functions + +// counts the number of different bits between two symbols +unsigned int count_bit_errors(unsigned int _s1, unsigned int _s2); + +// counts the number of different bits between two arrays of symbols +// _msg0 : original message [size: _n x 1] +// _msg1 : copy of original message [size: _n x 1] +// _n : message size +unsigned int count_bit_errors_array(unsigned char * _msg0, + unsigned char * _msg1, + unsigned int _n); + +// converts binary-coded decimal (BCD) to gray, ensuring successive values +// differ by exactly one bit +unsigned int gray_encode(unsigned int symbol_in); + +// converts a gray-encoded symbol to binary-coded decimal (BCD) +unsigned int gray_decode(unsigned int symbol_in); + +// pack soft bits into symbol +// _soft_bits : soft input bits [size: _bps x 1] +// _bps : bits per symbol +// _sym_out : output symbol, value in [0,2^_bps) +int liquid_pack_soft_bits(unsigned char * _soft_bits, + unsigned int _bps, + unsigned int * _sym_out); + +// unpack soft bits into symbol +// _sym_in : input symbol, value in [0,2^_bps) +// _bps : bits per symbol +// _soft_bits : soft output bits [size: _bps x 1] +int liquid_unpack_soft_bits(unsigned int _sym_in, + unsigned int _bps, + unsigned char * _soft_bits); + + +// +// Linear modem +// + +#define LIQUID_MODEM_MANGLE_FLOAT(name) LIQUID_CONCAT(modem,name) + +// Macro : MODEM +// MODEM : name-mangling macro +// T : primitive data type +// TC : primitive data type (complex) +#define LIQUID_MODEM_DEFINE_API(MODEM,T,TC) \ + \ +/* Linear modulator/demodulator (modem) object */ \ +typedef struct MODEM(_s) * MODEM(); \ + \ +/* Create digital modem object with a particular scheme */ \ +/* _scheme : linear modulation scheme (e.g. LIQUID_MODEM_QPSK) */ \ +MODEM() MODEM(_create)(modulation_scheme _scheme); \ + \ +/* Create linear digital modem object with arbitrary constellation */ \ +/* points defined by an external table of symbols. Sample points are */ \ +/* provided as complex float pairs and converted internally if needed. */ \ +/* _table : array of complex constellation points, [size: _M x 1] */ \ +/* _M : modulation order and table size, _M must be power of 2 */ \ +MODEM() MODEM(_create_arbitrary)(liquid_float_complex * _table, \ + unsigned int _M); \ + \ +/* Recreate modulation scheme, re-allocating memory as necessary */ \ +/* _q : modem object */ \ +/* _scheme : linear modulation scheme (e.g. 
LIQUID_MODEM_QPSK) */ \ +MODEM() MODEM(_recreate)(MODEM() _q, \ + modulation_scheme _scheme); \ + \ +/* Destroy modem object, freeing all allocated memory */ \ +int MODEM(_destroy)(MODEM() _q); \ + \ +/* Print modem status to stdout */ \ +int MODEM(_print)(MODEM() _q); \ + \ +/* Reset internal state of modem object; note that this is only */ \ +/* relevant for modulation types that retain an internal state such as */ \ +/* LIQUID_MODEM_DPSK4 as most linear modulation types are stateless */ \ +int MODEM(_reset)(MODEM() _q); \ + \ +/* Generate random symbol for modulation */ \ +unsigned int MODEM(_gen_rand_sym)(MODEM() _q); \ + \ +/* Get number of bits per symbol (bps) of modem object */ \ +unsigned int MODEM(_get_bps)(MODEM() _q); \ + \ +/* Get modulation scheme of modem object */ \ +modulation_scheme MODEM(_get_scheme)(MODEM() _q); \ + \ +/* Modulate input symbol (bits) and generate output complex sample */ \ +/* _q : modem object */ \ +/* _s : input symbol, 0 <= _s <= M-1 */ \ +/* _y : output complex sample */ \ +int MODEM(_modulate)(MODEM() _q, \ + unsigned int _s, \ + TC * _y); \ + \ +/* Demodulate input sample and provide maximum-likelihood estimate of */ \ +/* symbol that would have generated it. */ \ +/* The output is a hard decision value on the input sample. */ \ +/* This is performed efficiently by taking advantage of symmetry on */ \ +/* most modulation types. */ \ +/* For example, square and rectangular quadrature amplitude modulation */ \ +/* with gray coding can use a bisection search indepdently on its */ \ +/* in-phase and quadrature channels. */ \ +/* Arbitrary modulation schemes are relatively slow, however, for large */ \ +/* modulation types as the demodulator must compute the distance */ \ +/* between the received sample and all possible symbols to derive the */ \ +/* optimal symbol. */ \ +/* _q : modem object */ \ +/* _x : input sample */ \ +/* _s : output hard symbol, 0 <= _s <= M-1 */ \ +int MODEM(_demodulate)(MODEM() _q, \ + TC _x, \ + unsigned int * _s); \ + \ +/* Demodulate input sample and provide (approximate) log-likelihood */ \ +/* ratio (LLR, soft bits) as an output. */ \ +/* Similarly to the hard-decision demodulation method, this is computed */ \ +/* efficiently for most modulation types. 
*/ \ +/* _q : modem object */ \ +/* _x : input sample */ \ +/* _s : output hard symbol, 0 <= _s <= M-1 */ \ +/* _soft_bits : output soft bits, [size: log2(M) x 1] */ \ +int MODEM(_demodulate_soft)(MODEM() _q, \ + TC _x, \ + unsigned int * _s, \ + unsigned char * _soft_bits); \ + \ +/* Get demodulator's estimated transmit sample */ \ +int MODEM(_get_demodulator_sample)(MODEM() _q, \ + TC * _x_hat); \ + \ +/* Get demodulator phase error */ \ +float MODEM(_get_demodulator_phase_error)(MODEM() _q); \ + \ +/* Get demodulator error vector magnitude */ \ +float MODEM(_get_demodulator_evm)(MODEM() _q); \ + +// define modem APIs +LIQUID_MODEM_DEFINE_API(LIQUID_MODEM_MANGLE_FLOAT,float,liquid_float_complex) + + +// +// continuous-phase modulation +// + +// gmskmod : GMSK modulator +typedef struct gmskmod_s * gmskmod; + +// create gmskmod object +// _k : samples/symbol +// _m : filter delay (symbols) +// _BT : excess bandwidth factor +gmskmod gmskmod_create(unsigned int _k, + unsigned int _m, + float _BT); +int gmskmod_destroy(gmskmod _q); +int gmskmod_print(gmskmod _q); +int gmskmod_reset(gmskmod _q); +int gmskmod_modulate(gmskmod _q, + unsigned int _sym, + liquid_float_complex * _y); + + +// gmskdem : GMSK demodulator +typedef struct gmskdem_s * gmskdem; + +// create gmskdem object +// _k : samples/symbol +// _m : filter delay (symbols) +// _BT : excess bandwidth factor +gmskdem gmskdem_create(unsigned int _k, + unsigned int _m, + float _BT); +int gmskdem_destroy(gmskdem _q); +int gmskdem_print(gmskdem _q); +int gmskdem_reset(gmskdem _q); +int gmskdem_set_eq_bw(gmskdem _q, float _bw); +int gmskdem_demodulate(gmskdem _q, + liquid_float_complex * _y, + unsigned int * _sym); + +// +// continuous phase frequency-shift keying (CP-FSK) modems +// + +// CP-FSK filter prototypes +typedef enum { + LIQUID_CPFSK_SQUARE=0, // square pulse + LIQUID_CPFSK_RCOS_FULL, // raised-cosine (full response) + LIQUID_CPFSK_RCOS_PARTIAL, // raised-cosine (partial response) + LIQUID_CPFSK_GMSK, // Gauss minimum-shift keying pulse +} liquid_cpfsk_filter; + +// CP-FSK modulator +typedef struct cpfskmod_s * cpfskmod; + +// create cpfskmod object (frequency modulator) +// _bps : bits per symbol, _bps > 0 +// _h : modulation index, _h > 0 +// _k : samples/symbol, _k > 1, _k even +// _m : filter delay (symbols), _m > 0 +// _beta : filter bandwidth parameter, _beta > 0 +// _type : filter type (e.g. LIQUID_CPFSK_SQUARE) +cpfskmod cpfskmod_create(unsigned int _bps, + float _h, + unsigned int _k, + unsigned int _m, + float _beta, + int _type); +//cpfskmod cpfskmod_create_msk(unsigned int _k); +//cpfskmod cpfskmod_create_gmsk(unsigned int _k, float _BT); + +// destroy cpfskmod object +int cpfskmod_destroy(cpfskmod _q); + +// print cpfskmod object internals +int cpfskmod_print(cpfskmod _q); + +// reset state +int cpfskmod_reset(cpfskmod _q); + +// get transmit delay [symbols] +unsigned int cpfskmod_get_delay(cpfskmod _q); + +// modulate sample +// _q : frequency modulator object +// _s : input symbol +// _y : output sample array [size: _k x 1] +int cpfskmod_modulate(cpfskmod _q, + unsigned int _s, + liquid_float_complex * _y); + + + +// CP-FSK demodulator +typedef struct cpfskdem_s * cpfskdem; + +// create cpfskdem object (frequency modulator) +// _bps : bits per symbol, _bps > 0 +// _h : modulation index, _h > 0 +// _k : samples/symbol, _k > 1, _k even +// _m : filter delay (symbols), _m > 0 +// _beta : filter bandwidth parameter, _beta > 0 +// _type : filter type (e.g. 
LIQUID_CPFSK_SQUARE) +cpfskdem cpfskdem_create(unsigned int _bps, + float _h, + unsigned int _k, + unsigned int _m, + float _beta, + int _type); +//cpfskdem cpfskdem_create_msk(unsigned int _k); +//cpfskdem cpfskdem_create_gmsk(unsigned int _k, float _BT); + +// destroy cpfskdem object +int cpfskdem_destroy(cpfskdem _q); + +// print cpfskdem object internals +int cpfskdem_print(cpfskdem _q); + +// reset state +int cpfskdem_reset(cpfskdem _q); + +// get receive delay [symbols] +unsigned int cpfskdem_get_delay(cpfskdem _q); + +#if 0 +// demodulate array of samples +// _q : continuous-phase frequency demodulator object +// _y : input sample array [size: _n x 1] +// _n : input sample array length +// _s : output symbol array +// _nw : number of output symbols written +int cpfskdem_demodulate(cpfskdem _q, + liquid_float_complex * _y, + unsigned int _n, + unsigned int * _s, + unsigned int * _nw); +#else +// demodulate array of samples, assuming perfect timing +// _q : continuous-phase frequency demodulator object +// _y : input sample array [size: _k x 1] +unsigned int cpfskdem_demodulate(cpfskdem _q, + liquid_float_complex * _y); +#endif + + + +// +// M-ary frequency-shift keying (MFSK) modems +// + +// FSK modulator +typedef struct fskmod_s * fskmod; + +// create fskmod object (frequency modulator) +// _m : bits per symbol, _bps > 0 +// _k : samples/symbol, _k >= 2^_m +// _bandwidth : total signal bandwidth, (0,0.5) +fskmod fskmod_create(unsigned int _m, + unsigned int _k, + float _bandwidth); + +// destroy fskmod object +int fskmod_destroy(fskmod _q); + +// print fskmod object internals +int fskmod_print(fskmod _q); + +// reset state +int fskmod_reset(fskmod _q); + +// modulate sample +// _q : frequency modulator object +// _s : input symbol +// _y : output sample array [size: _k x 1] +int fskmod_modulate(fskmod _q, + unsigned int _s, + liquid_float_complex * _y); + + + +// FSK demodulator +typedef struct fskdem_s * fskdem; + +// create fskdem object (frequency demodulator) +// _m : bits per symbol, _bps > 0 +// _k : samples/symbol, _k >= 2^_m +// _bandwidth : total signal bandwidth, (0,0.5) +fskdem fskdem_create(unsigned int _m, + unsigned int _k, + float _bandwidth); + +// destroy fskdem object +int fskdem_destroy(fskdem _q); + +// print fskdem object internals +int fskdem_print(fskdem _q); + +// reset state +int fskdem_reset(fskdem _q); + +// demodulate symbol, assuming perfect symbol timing +// _q : fskdem object +// _y : input sample array [size: _k x 1] +unsigned int fskdem_demodulate(fskdem _q, + liquid_float_complex * _y); + +// get demodulator frequency error +float fskdem_get_frequency_error(fskdem _q); + +// get energy for a particular symbol within a certain range +float fskdem_get_symbol_energy(fskdem _q, + unsigned int _s, + unsigned int _range); + + +// +// Analog frequency modulator +// +#define LIQUID_FREQMOD_MANGLE_FLOAT(name) LIQUID_CONCAT(freqmod,name) + +// Macro : FREQMOD (analog frequency modulator) +// FREQMOD : name-mangling macro +// T : primitive data type +// TC : primitive data type (complex) +#define LIQUID_FREQMOD_DEFINE_API(FREQMOD,T,TC) \ + \ +/* Analog frequency modulation object */ \ +typedef struct FREQMOD(_s) * FREQMOD(); \ + \ +/* Create freqmod object with a particular modulation factor */ \ +/* _kf : modulation factor */ \ +FREQMOD() FREQMOD(_create)(float _kf); \ + \ +/* Destroy freqmod object, freeing all internal memory */ \ +int FREQMOD(_destroy)(FREQMOD() _q); \ + \ +/* Print freqmod object internals to stdout */ \ +int FREQMOD(_print)(FREQMOD() 
_q); \ + \ +/* Reset state */ \ +int FREQMOD(_reset)(FREQMOD() _q); \ + \ +/* Modulate single sample, producing single output sample at complex */ \ +/* baseband. */ \ +/* _q : frequency modulator object */ \ +/* _m : message signal \( m(t) \) */ \ +/* _s : complex baseband signal \( s(t) \) */ \ +int FREQMOD(_modulate)(FREQMOD() _q, \ + T _m, \ + TC * _s); \ + \ +/* Modulate block of samples */ \ +/* _q : frequency modulator object */ \ +/* _m : message signal \( m(t) \), [size: _n x 1] */ \ +/* _n : number of input, output samples */ \ +/* _s : complex baseband signal \( s(t) \), [size: _n x 1] */ \ +int FREQMOD(_modulate_block)(FREQMOD() _q, \ + T * _m, \ + unsigned int _n, \ + TC * _s); \ + +// define freqmod APIs +LIQUID_FREQMOD_DEFINE_API(LIQUID_FREQMOD_MANGLE_FLOAT,float,liquid_float_complex) + +// +// Analog frequency demodulator +// + +#define LIQUID_FREQDEM_MANGLE_FLOAT(name) LIQUID_CONCAT(freqdem,name) + +// Macro : FREQDEM (analog frequency modulator) +// FREQDEM : name-mangling macro +// T : primitive data type +// TC : primitive data type (complex) +#define LIQUID_FREQDEM_DEFINE_API(FREQDEM,T,TC) \ +typedef struct FREQDEM(_s) * FREQDEM(); \ + \ +/* create freqdem object (frequency modulator) */ \ +/* _kf : modulation factor */ \ +FREQDEM() FREQDEM(_create)(float _kf); \ + \ +/* destroy freqdem object */ \ +int FREQDEM(_destroy)(FREQDEM() _q); \ + \ +/* print freqdem object internals */ \ +int FREQDEM(_print)(FREQDEM() _q); \ + \ +/* reset state */ \ +int FREQDEM(_reset)(FREQDEM() _q); \ + \ +/* demodulate sample */ \ +/* _q : frequency modulator object */ \ +/* _r : received signal r(t) */ \ +/* _m : output message signal m(t) */ \ +int FREQDEM(_demodulate)(FREQDEM() _q, \ + TC _r, \ + T * _m); \ + \ +/* demodulate block of samples */ \ +/* _q : frequency demodulator object */ \ +/* _r : received signal r(t) [size: _n x 1] */ \ +/* _n : number of input, output samples */ \ +/* _m : message signal m(t), [size: _n x 1] */ \ +int FREQDEM(_demodulate_block)(FREQDEM() _q, \ + TC * _r, \ + unsigned int _n, \ + T * _m); \ + +// define freqdem APIs +LIQUID_FREQDEM_DEFINE_API(LIQUID_FREQDEM_MANGLE_FLOAT,float,liquid_float_complex) + + + +// amplitude modulation types +typedef enum { + LIQUID_AMPMODEM_DSB=0, // double side-band + LIQUID_AMPMODEM_USB, // single side-band (upper) + LIQUID_AMPMODEM_LSB // single side-band (lower) +} liquid_ampmodem_type; + +typedef struct ampmodem_s * ampmodem; + +// create ampmodem object +// _m : modulation index +// _type : AM type (e.g. 
LIQUID_AMPMODEM_DSB) +// _suppressed_carrier : carrier suppression flag +ampmodem ampmodem_create(float _mod_index, + liquid_ampmodem_type _type, + int _suppressed_carrier); + +// destroy ampmodem object +int ampmodem_destroy(ampmodem _q); + +// print ampmodem object internals +int ampmodem_print(ampmodem _q); + +// reset ampmodem object state +int ampmodem_reset(ampmodem _q); + +// accessor methods +unsigned int ampmodem_get_delay_mod (ampmodem _q); +unsigned int ampmodem_get_delay_demod(ampmodem _q); + +// modulate sample +int ampmodem_modulate(ampmodem _q, + float _x, + liquid_float_complex * _y); + +int ampmodem_modulate_block(ampmodem _q, + float * _m, + unsigned int _n, + liquid_float_complex * _s); + +// demodulate sample +int ampmodem_demodulate(ampmodem _q, + liquid_float_complex _y, + float * _x); + +int ampmodem_demodulate_block(ampmodem _q, + liquid_float_complex * _r, + unsigned int _n, + float * _m); + +// +// MODULE : multichannel +// + + +#define FIRPFBCH_NYQUIST 0 +#define FIRPFBCH_ROOTNYQUIST 1 + +#define LIQUID_ANALYZER 0 +#define LIQUID_SYNTHESIZER 1 + + +// +// Finite impulse response polyphase filterbank channelizer +// + +#define LIQUID_FIRPFBCH_MANGLE_CRCF(name) LIQUID_CONCAT(firpfbch_crcf,name) +#define LIQUID_FIRPFBCH_MANGLE_CCCF(name) LIQUID_CONCAT(firpfbch_cccf,name) + +// Macro: +// FIRPFBCH : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_FIRPFBCH_DEFINE_API(FIRPFBCH,TO,TC,TI) \ +typedef struct FIRPFBCH(_s) * FIRPFBCH(); \ + \ +/* create finite impulse response polyphase filter-bank */ \ +/* channelizer object from external coefficients */ \ +/* _type : channelizer type, e.g. LIQUID_ANALYZER */ \ +/* _M : number of channels */ \ +/* _p : number of coefficients for each channel */ \ +/* _h : coefficients [size: _M*_p x 1] */ \ +FIRPFBCH() FIRPFBCH(_create)(int _type, \ + unsigned int _M, \ + unsigned int _p, \ + TC * _h); \ + \ +/* create FIR polyphase filterbank channelizer object with */ \ +/* prototype filter based on windowed Kaiser design */ \ +/* _type : type (LIQUID_ANALYZER | LIQUID_SYNTHESIZER) */ \ +/* _M : number of channels */ \ +/* _m : filter delay (symbols) */ \ +/* _As : stop-band attentuation [dB] */ \ +FIRPFBCH() FIRPFBCH(_create_kaiser)(int _type, \ + unsigned int _M, \ + unsigned int _m, \ + float _As); \ + \ +/* create FIR polyphase filterbank channelizer object with */ \ +/* prototype root-Nyquist filter */ \ +/* _type : type (LIQUID_ANALYZER | LIQUID_SYNTHESIZER) */ \ +/* _M : number of channels */ \ +/* _m : filter delay (symbols) */ \ +/* _beta : filter excess bandwidth factor, in [0,1] */ \ +/* _ftype : filter prototype (rrcos, rkaiser, etc.) 
*/ \ +FIRPFBCH() FIRPFBCH(_create_rnyquist)(int _type, \ + unsigned int _M, \ + unsigned int _m, \ + float _beta, \ + int _ftype); \ + \ +/* destroy firpfbch object */ \ +int FIRPFBCH(_destroy)(FIRPFBCH() _q); \ + \ +/* clear/reset firpfbch internal state */ \ +int FIRPFBCH(_reset)(FIRPFBCH() _q); \ + \ +/* print firpfbch internal parameters to stdout */ \ +int FIRPFBCH(_print)(FIRPFBCH() _q); \ + \ +/* execute filterbank as synthesizer on block of samples */ \ +/* _q : filterbank channelizer object */ \ +/* _x : channelized input, [size: num_channels x 1] */ \ +/* _y : output time series, [size: num_channels x 1] */ \ +int FIRPFBCH(_synthesizer_execute)(FIRPFBCH() _q, \ + TI * _x, \ + TO * _y); \ + \ +/* execute filterbank as analyzer on block of samples */ \ +/* _q : filterbank channelizer object */ \ +/* _x : input time series, [size: num_channels x 1] */ \ +/* _y : channelized output, [size: num_channels x 1] */ \ +int FIRPFBCH(_analyzer_execute)(FIRPFBCH() _q, \ + TI * _x, \ + TO * _y); \ + + +LIQUID_FIRPFBCH_DEFINE_API(LIQUID_FIRPFBCH_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +LIQUID_FIRPFBCH_DEFINE_API(LIQUID_FIRPFBCH_MANGLE_CCCF, + liquid_float_complex, + liquid_float_complex, + liquid_float_complex) + + +// +// Finite impulse response polyphase filterbank channelizer +// with output rate 2 Fs / M +// + +#define LIQUID_FIRPFBCH2_MANGLE_CRCF(name) LIQUID_CONCAT(firpfbch2_crcf,name) + +// Macro: +// FIRPFBCH2 : name-mangling macro +// TO : output data type +// TC : coefficients data type +// TI : input data type +#define LIQUID_FIRPFBCH2_DEFINE_API(FIRPFBCH2,TO,TC,TI) \ +typedef struct FIRPFBCH2(_s) * FIRPFBCH2(); \ + \ +/* create firpfbch2 object */ \ +/* _type : channelizer type (e.g. LIQUID_ANALYZER) */ \ +/* _M : number of channels (must be even) */ \ +/* _m : prototype filter semi-length, length=2*M*m */ \ +/* _h : prototype filter coefficient array */ \ +FIRPFBCH2() FIRPFBCH2(_create)(int _type, \ + unsigned int _M, \ + unsigned int _m, \ + TC * _h); \ + \ +/* create firpfbch2 object using Kaiser window prototype */ \ +/* _type : channelizer type (e.g. 
LIQUID_ANALYZER) */ \ +/* _M : number of channels (must be even) */ \ +/* _m : prototype filter semi-length, length=2*M*m+1 */ \ +/* _As : filter stop-band attenuation [dB] */ \ +FIRPFBCH2() FIRPFBCH2(_create_kaiser)(int _type, \ + unsigned int _M, \ + unsigned int _m, \ + float _As); \ + \ +/* destroy firpfbch2 object, freeing internal memory */ \ +int FIRPFBCH2(_destroy)(FIRPFBCH2() _q); \ + \ +/* reset firpfbch2 object internals */ \ +int FIRPFBCH2(_reset)(FIRPFBCH2() _q); \ + \ +/* print firpfbch2 object internals */ \ +int FIRPFBCH2(_print)(FIRPFBCH2() _q); \ + \ +/* execute filterbank channelizer */ \ +/* LIQUID_ANALYZER: input: M/2, output: M */ \ +/* LIQUID_SYNTHESIZER: input: M, output: M/2 */ \ +/* _x : channelizer input */ \ +/* _y : channelizer output */ \ +int FIRPFBCH2(_execute)(FIRPFBCH2() _q, \ + TI * _x, \ + TO * _y); \ + + +LIQUID_FIRPFBCH2_DEFINE_API(LIQUID_FIRPFBCH2_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + +// +// Finite impulse response polyphase filterbank channelizer +// with output rate Fs * P / M +// + +#define LIQUID_FIRPFBCHR_MANGLE_CRCF(name) LIQUID_CONCAT(firpfbchr_crcf,name) + +#define LIQUID_FIRPFBCHR_DEFINE_API(FIRPFBCHR,TO,TC,TI) \ +typedef struct FIRPFBCHR(_s) * FIRPFBCHR(); \ + \ +/* create rational rate resampling channelizer (firpfbchr) object by */ \ +/* specifying filter coefficients directly */ \ +/* _M : number of output channels in chanelizer */ \ +/* _P : output decimation factor (output rate is 1/P the input) */ \ +/* _m : prototype filter semi-length, length=2*M*m */ \ +/* _h : prototype filter coefficient array, [size: 2*M*m x 1] */ \ +FIRPFBCHR() FIRPFBCHR(_create)(unsigned int _M, \ + unsigned int _P, \ + unsigned int _m, \ + TC * _h); \ + \ +/* create rational rate resampling channelizer (firpfbchr) object by */ \ +/* specifying filter design parameters for Kaiser prototype */ \ +/* _M : number of output channels in chanelizer */ \ +/* _P : output decimation factor (output rate is 1/P the input) */ \ +/* _m : prototype filter semi-length, length=2*M*m */ \ +/* _As : filter stop-band attenuation [dB] */ \ +FIRPFBCHR() FIRPFBCHR(_create_kaiser)(unsigned int _M, \ + unsigned int _P, \ + unsigned int _m, \ + float _As); \ + \ +/* destroy firpfbchr object, freeing internal memory */ \ +int FIRPFBCHR(_destroy)(FIRPFBCHR() _q); \ + \ +/* reset firpfbchr object internal state and buffers */ \ +int FIRPFBCHR(_reset)(FIRPFBCHR() _q); \ + \ +/* print firpfbchr object internals to stdout */ \ +int FIRPFBCHR(_print)(FIRPFBCHR() _q); \ + \ +/* get number of output channels to channelizer */ \ +unsigned int FIRPFBCHR(_get_M)(FIRPFBCHR() _q); \ + \ +/* get decimation factor for channelizer */ \ +unsigned int FIRPFBCHR(_get_P)(FIRPFBCHR() _q); \ + \ +/* get semi-length to channelizer filter prototype */ \ +unsigned int FIRPFBCHR(_get_m)(FIRPFBCHR() _q); \ + \ +/* push buffer of samples into filter bank */ \ +/* _q : channelizer object */ \ +/* _x : channelizer input [size: P x 1] */ \ +int FIRPFBCHR(_push)(FIRPFBCHR() _q, \ + TI * _x); \ + \ +/* execute filterbank channelizer, writing complex baseband samples for */ \ +/* each channel into output array */ \ +/* _q : channelizer object */ \ +/* _y : channelizer output [size: _M x 1] */ \ +int FIRPFBCHR(_execute)(FIRPFBCHR() _q, \ + TO * _y); \ + + +LIQUID_FIRPFBCHR_DEFINE_API(LIQUID_FIRPFBCHR_MANGLE_CRCF, + liquid_float_complex, + float, + liquid_float_complex) + + + +#define OFDMFRAME_SCTYPE_NULL 0 +#define OFDMFRAME_SCTYPE_PILOT 1 +#define OFDMFRAME_SCTYPE_DATA 2 + +// 
initialize default subcarrier allocation +// _M : number of subcarriers +// _p : output subcarrier allocation array, [size: _M x 1] +int ofdmframe_init_default_sctype(unsigned int _M, + unsigned char * _p); + +// initialize default subcarrier allocation +// _M : number of subcarriers +// _f0 : lower frequency band, _f0 in [-0.5,0.5] +// _f1 : upper frequency band, _f1 in [-0.5,0.5] +// _p : output subcarrier allocation array, [size: _M x 1] +int ofdmframe_init_sctype_range(unsigned int _M, + float _f0, + float _f1, + unsigned char * _p); + +// validate subcarrier type (count number of null, pilot, and data +// subcarriers in the allocation) +// _p : subcarrier allocation array, [size: _M x 1] +// _M : number of subcarriers +// _M_null : output number of null subcarriers +// _M_pilot : output number of pilot subcarriers +// _M_data : output number of data subcarriers +int ofdmframe_validate_sctype(unsigned char * _p, + unsigned int _M, + unsigned int * _M_null, + unsigned int * _M_pilot, + unsigned int * _M_data); + +// print subcarrier allocation to screen +// _p : output subcarrier allocation array, [size: _M x 1] +// _M : number of subcarriers +int ofdmframe_print_sctype(unsigned char * _p, + unsigned int _M); + + +// +// OFDM frame (symbol) generator +// +typedef struct ofdmframegen_s * ofdmframegen; + +// create OFDM framing generator object +// _M : number of subcarriers, >10 typical +// _cp_len : cyclic prefix length +// _taper_len : taper length (OFDM symbol overlap) +// _p : subcarrier allocation (null, pilot, data), [size: _M x 1] +ofdmframegen ofdmframegen_create(unsigned int _M, + unsigned int _cp_len, + unsigned int _taper_len, + unsigned char * _p); + +int ofdmframegen_destroy(ofdmframegen _q); + +int ofdmframegen_print(ofdmframegen _q); + +int ofdmframegen_reset(ofdmframegen _q); + +// write first S0 symbol +int ofdmframegen_write_S0a(ofdmframegen _q, + liquid_float_complex *_y); + +// write second S0 symbol +int ofdmframegen_write_S0b(ofdmframegen _q, + liquid_float_complex *_y); + +// write S1 symbol +int ofdmframegen_write_S1(ofdmframegen _q, + liquid_float_complex *_y); + +// write data symbol +int ofdmframegen_writesymbol(ofdmframegen _q, + liquid_float_complex * _x, + liquid_float_complex *_y); + +// write tail +int ofdmframegen_writetail(ofdmframegen _q, + liquid_float_complex * _x); + +// +// OFDM frame (symbol) synchronizer +// +typedef int (*ofdmframesync_callback)(liquid_float_complex * _y, + unsigned char * _p, + unsigned int _M, + void * _userdata); +typedef struct ofdmframesync_s * ofdmframesync; + +// create OFDM framing synchronizer object +// _M : number of subcarriers, >10 typical +// _cp_len : cyclic prefix length +// _taper_len : taper length (OFDM symbol overlap) +// _p : subcarrier allocation (null, pilot, data), [size: _M x 1] +// _callback : user-defined callback function +// _userdata : user-defined data pointer +ofdmframesync ofdmframesync_create(unsigned int _M, + unsigned int _cp_len, + unsigned int _taper_len, + unsigned char * _p, + ofdmframesync_callback _callback, + void * _userdata); +int ofdmframesync_destroy(ofdmframesync _q); +int ofdmframesync_print(ofdmframesync _q); +int ofdmframesync_reset(ofdmframesync _q); +int ofdmframesync_is_frame_open(ofdmframesync _q); +int ofdmframesync_execute(ofdmframesync _q, + liquid_float_complex * _x, + unsigned int _n); + +// query methods +float ofdmframesync_get_rssi(ofdmframesync _q); // received signal strength indication +float ofdmframesync_get_cfo(ofdmframesync _q); // carrier offset estimate + +// 
set methods +int ofdmframesync_set_cfo(ofdmframesync _q, float _cfo); // set carrier offset estimate + +// debugging +int ofdmframesync_debug_enable(ofdmframesync _q); +int ofdmframesync_debug_disable(ofdmframesync _q); +int ofdmframesync_debug_print(ofdmframesync _q, const char * _filename); + + +// +// MODULE : nco (numerically-controlled oscillator) +// + +// oscillator type +// LIQUID_NCO : numerically-controlled oscillator (fast) +// LIQUID_VCO : "voltage"-controlled oscillator (precise) +typedef enum { + LIQUID_NCO=0, + LIQUID_VCO +} liquid_ncotype; + +#define LIQUID_NCO_MANGLE_FLOAT(name) LIQUID_CONCAT(nco_crcf, name) + +// large macro +// NCO : name-mangling macro +// T : primitive data type +// TC : input/output data type +#define LIQUID_NCO_DEFINE_API(NCO,T,TC) \ + \ +/* Numerically-controlled oscillator object */ \ +typedef struct NCO(_s) * NCO(); \ + \ +/* Create nco object with either fixed-point or floating-point phase */ \ +/* _type : oscillator type, _type in {LIQUID_NCO, LIQUID_VCO} */ \ +NCO() NCO(_create)(liquid_ncotype _type); \ + \ +/* Destroy nco object, freeing all internally allocated memory */ \ +int NCO(_destroy)(NCO() _q); \ + \ +/* Print nco object internals to stdout */ \ +int NCO(_print)(NCO() _q); \ + \ +/* Set phase/frequency to zero and reset the phase-locked loop filter */ \ +/* state */ \ +int NCO(_reset)(NCO() _q); \ + \ +/* Get frequency of nco object in radians per sample */ \ +T NCO(_get_frequency)(NCO() _q); \ + \ +/* Set frequency of nco object in radians per sample */ \ +/* _q : nco object */ \ +/* _dtheta : input frequency [radians/sample] */ \ +int NCO(_set_frequency)(NCO() _q, \ + T _dtheta); \ + \ +/* Adjust frequency of nco object by a step size in radians per sample */ \ +/* _q : nco object */ \ +/* _step : input frequency step [radians/sample] */ \ +int NCO(_adjust_frequency)(NCO() _q, \ + T _step); \ + \ +/* Get phase of nco object in radians */ \ +T NCO(_get_phase)(NCO() _q); \ + \ +/* Set phase of nco object in radians */ \ +/* _q : nco object */ \ +/* _phi : input phase of nco object [radians] */ \ +int NCO(_set_phase)(NCO() _q, \ + T _phi); \ + \ +/* Adjust phase of nco object by a step of \(\Delta \phi\) radians */ \ +/* _q : nco object */ \ +/* _dphi : input nco object phase adjustment [radians] */ \ +int NCO(_adjust_phase)(NCO() _q, \ + T _dphi); \ + \ +/* Increment phase by internal phase step (frequency) */ \ +int NCO(_step)(NCO() _q); \ + \ +/* Compute sine output given internal phase */ \ +T NCO(_sin)(NCO() _q); \ + \ +/* Compute cosine output given internal phase */ \ +T NCO(_cos)(NCO() _q); \ + \ +/* Compute sine and cosine outputs given internal phase */ \ +/* _q : nco object */ \ +/* _s : output sine component of phase */ \ +/* _c : output cosine component of phase */ \ +int NCO(_sincos)(NCO() _q, \ + T * _s, \ + T * _c); \ + \ +/* Compute complex exponential output given internal phase */ \ +/* _q : nco object */ \ +/* _y : output complex exponential */ \ +int NCO(_cexpf)(NCO() _q, \ + TC * _y); \ + \ +/* Set bandwidth of internal phase-locked loop */ \ +/* _q : nco object */ \ +/* _bw : input phase-locked loop bandwidth, _bw >= 0 */ \ +int NCO(_pll_set_bandwidth)(NCO() _q, \ + T _bw); \ + \ +/* Step internal phase-locked loop given input phase error, adjusting */ \ +/* internal phase and frequency proportional to coefficients defined by */ \ +/* internal PLL bandwidth */ \ +/* _q : nco object */ \ +/* _dphi : input phase-locked loop phase error */ \ +int NCO(_pll_step)(NCO() _q, \ + T _dphi); \ + \ +/* Rotate input sample 
up by nco angle. */ \ +/* Note that this does not adjust the internal phase or frequency. */ \ +/* _q : nco object */ \ +/* _x : input complex sample */ \ +/* _y : pointer to output sample location */ \ +int NCO(_mix_up)(NCO() _q, \ + TC _x, \ + TC * _y); \ + \ +/* Rotate input sample down by nco angle. */ \ +/* Note that this does not adjust the internal phase or frequency. */ \ +/* _q : nco object */ \ +/* _x : input complex sample */ \ +/* _y : pointer to output sample location */ \ +int NCO(_mix_down)(NCO() _q, \ + TC _x, \ + TC * _y); \ + \ +/* Rotate input vector up by NCO angle (stepping) */ \ +/* Note that this *does* adjust the internal phase as the signal steps */ \ +/* through each input sample. */ \ +/* _q : nco object */ \ +/* _x : array of input samples, [size: _n x 1] */ \ +/* _y : array of output samples, [size: _n x 1] */ \ +/* _n : number of input (and output) samples */ \ +int NCO(_mix_block_up)(NCO() _q, \ + TC * _x, \ + TC * _y, \ + unsigned int _n); \ + \ +/* Rotate input vector down by NCO angle (stepping) */ \ +/* Note that this *does* adjust the internal phase as the signal steps */ \ +/* through each input sample. */ \ +/* _q : nco object */ \ +/* _x : array of input samples, [size: _n x 1] */ \ +/* _y : array of output samples, [size: _n x 1] */ \ +/* _n : number of input (and output) samples */ \ +int NCO(_mix_block_down)(NCO() _q, \ + TC * _x, \ + TC * _y, \ + unsigned int _n); \ + +// Define nco APIs +LIQUID_NCO_DEFINE_API(LIQUID_NCO_MANGLE_FLOAT, float, liquid_float_complex) + + +// nco utilities + +// unwrap phase of array (basic) +void liquid_unwrap_phase(float * _theta, unsigned int _n); + +// unwrap phase of array (advanced) +void liquid_unwrap_phase2(float * _theta, unsigned int _n); + +#define SYNTH_MANGLE_FLOAT(name) LIQUID_CONCAT(synth_crcf, name) + +// large macro +// SYNTH : name-mangling macro +// T : primitive data type +// TC : input/output data type +#define LIQUID_SYNTH_DEFINE_API(SYNTH,T,TC) \ +typedef struct SYNTH(_s) * SYNTH(); \ + \ +SYNTH() SYNTH(_create)(const TC *_table, unsigned int _length); \ +void SYNTH(_destroy)(SYNTH() _q); \ + \ +void SYNTH(_reset)(SYNTH() _q); \ + \ +/* get/set/adjust internal frequency/phase */ \ +T SYNTH(_get_frequency)( SYNTH() _q); \ +void SYNTH(_set_frequency)( SYNTH() _q, T _f); \ +void SYNTH(_adjust_frequency)(SYNTH() _q, T _df); \ +T SYNTH(_get_phase)( SYNTH() _q); \ +void SYNTH(_set_phase)( SYNTH() _q, T _phi); \ +void SYNTH(_adjust_phase)( SYNTH() _q, T _dphi); \ + \ +unsigned int SYNTH(_get_length)(SYNTH() _q); \ +TC SYNTH(_get_current)(SYNTH() _q); \ +TC SYNTH(_get_half_previous)(SYNTH() _q); \ +TC SYNTH(_get_half_next)(SYNTH() _q); \ + \ +void SYNTH(_step)(SYNTH() _q); \ + \ +/* pll : phase-locked loop */ \ +void SYNTH(_pll_set_bandwidth)(SYNTH() _q, T _bandwidth); \ +void SYNTH(_pll_step)(SYNTH() _q, T _dphi); \ + \ +/* Rotate input sample up by SYNTH angle (no stepping) */ \ +void SYNTH(_mix_up)(SYNTH() _q, TC _x, TC *_y); \ + \ +/* Rotate input sample down by SYNTH angle (no stepping) */ \ +void SYNTH(_mix_down)(SYNTH() _q, TC _x, TC *_y); \ + \ +/* Rotate input vector up by SYNTH angle (stepping) */ \ +void SYNTH(_mix_block_up)(SYNTH() _q, \ + TC *_x, \ + TC *_y, \ + unsigned int _N); \ + \ +/* Rotate input vector down by SYNTH angle (stepping) */ \ +void SYNTH(_mix_block_down)(SYNTH() _q, \ + TC *_x, \ + TC *_y, \ + unsigned int _N); \ + \ +void SYNTH(_spread)(SYNTH() _q, \ + TC _x, \ + TC *_y); \ + \ +void SYNTH(_despread)(SYNTH() _q, \ + TC *_x, \ + TC *_y); \ + \ +void 
SYNTH(_despread_triple)(SYNTH() _q, \ + TC *_x, \ + TC *_early, \ + TC *_punctual, \ + TC *_late); \ + +// Define synth APIs +LIQUID_SYNTH_DEFINE_API(SYNTH_MANGLE_FLOAT, float, liquid_float_complex) + + + +// +// MODULE : optimization +// + +// utility function pointer definition +typedef float (*utility_function)(void * _userdata, + float * _v, + unsigned int _n); + +// n-dimensional Rosenbrock utility function (minimum at _v = {1,1,1...} +// _userdata : user-defined data structure (convenience) +// _v : input vector [size: _n x 1] +// _n : input vector size +float liquid_rosenbrock(void * _userdata, + float * _v, + unsigned int _n); + +// n-dimensional inverse Gauss utility function (minimum at _v = {0,0,0...} +// _userdata : user-defined data structure (convenience) +// _v : input vector [size: _n x 1] +// _n : input vector size +float liquid_invgauss(void * _userdata, + float * _v, + unsigned int _n); + +// n-dimensional multimodal utility function (minimum at _v = {0,0,0...} +// _userdata : user-defined data structure (convenience) +// _v : input vector [size: _n x 1] +// _n : input vector size +float liquid_multimodal(void * _userdata, + float * _v, + unsigned int _n); + +// n-dimensional spiral utility function (minimum at _v = {0,0,0...} +// _userdata : user-defined data structure (convenience) +// _v : input vector [size: _n x 1] +// _n : input vector size +float liquid_spiral(void * _userdata, + float * _v, + unsigned int _n); + + +// +// Gradient search +// + +#define LIQUID_OPTIM_MINIMIZE (0) +#define LIQUID_OPTIM_MAXIMIZE (1) + +typedef struct gradsearch_s * gradsearch; + +// Create a gradient search object +// _userdata : user data object pointer +// _v : array of parameters to optimize +// _num_parameters : array length (number of parameters to optimize) +// _u : utility function pointer +// _direction : search direction (e.g. LIQUID_OPTIM_MAXIMIZE) +gradsearch gradsearch_create(void * _userdata, + float * _v, + unsigned int _num_parameters, + utility_function _utility, + int _direction); + +// Destroy a gradsearch object +void gradsearch_destroy(gradsearch _q); + +// Prints current status of search +void gradsearch_print(gradsearch _q); + +// Iterate once +float gradsearch_step(gradsearch _q); + +// Execute the search +float gradsearch_execute(gradsearch _q, + unsigned int _max_iterations, + float _target_utility); + + +// quasi-Newton search +typedef struct qnsearch_s * qnsearch; + +// Create a simple qnsearch object; parameters are specified internally +// _userdata : userdata +// _v : array of parameters to optimize +// _num_parameters : array length +// _get_utility : utility function pointer +// _direction : search direction (e.g. 
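A usage sketch for the gradient-search API declared above, minimizing the Rosenbrock utility it ships with. Illustrative only; the wrapper name gradsearch_example, the starting point, the iteration limit and the tolerance are arbitrary choices, not values used by hsmodem.

    #include <stdio.h>
    #include "liquid.h"   // assumed include path

    void gradsearch_example(void)
    {
        float v[4] = {0.0f, 0.0f, 0.0f, 0.0f};          // starting point
        gradsearch q = gradsearch_create(NULL, v, 4,
                                         liquid_rosenbrock,
                                         LIQUID_OPTIM_MINIMIZE);
        // run at most 10000 iterations, or stop once the utility drops below 1e-6
        float u = gradsearch_execute(q, 10000, 1e-6f);
        printf("u=%g at v={%f %f %f %f}\n", u, v[0], v[1], v[2], v[3]);
        gradsearch_destroy(q);
    }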
LIQUID_OPTIM_MAXIMIZE) +qnsearch qnsearch_create(void * _userdata, + float * _v, + unsigned int _num_parameters, + utility_function _u, + int _direction); + +// Destroy a qnsearch object +int qnsearch_destroy(qnsearch _g); + +// Prints current status of search +int qnsearch_print(qnsearch _g); + +// Resets internal state +int qnsearch_reset(qnsearch _g); + +// Iterate once +int qnsearch_step(qnsearch _g); + +// Execute the search +float qnsearch_execute(qnsearch _g, + unsigned int _max_iterations, + float _target_utility); + +// +// chromosome (for genetic algorithm search) +// +typedef struct chromosome_s * chromosome; + +// create a chromosome object, variable bits/trait +chromosome chromosome_create(unsigned int * _bits_per_trait, + unsigned int _num_traits); + +// create a chromosome object, all traits same resolution +chromosome chromosome_create_basic(unsigned int _num_traits, + unsigned int _bits_per_trait); + +// create a chromosome object, cloning a parent +chromosome chromosome_create_clone(chromosome _parent); + +// copy existing chromosomes' internal traits (all other internal +// parameters must be equal) +int chromosome_copy(chromosome _parent, chromosome _child); + +// Destroy a chromosome object +int chromosome_destroy(chromosome _c); + +// get number of traits in chromosome +unsigned int chromosome_get_num_traits(chromosome _c); + +// Print chromosome values to screen (binary representation) +int chromosome_print(chromosome _c); + +// Print chromosome values to screen (floating-point representation) +int chromosome_printf(chromosome _c); + +// clear chromosome (set traits to zero) +int chromosome_reset(chromosome _c); + +// initialize chromosome on integer values +int chromosome_init(chromosome _c, + unsigned int * _v); + +// initialize chromosome on floating-point values +int chromosome_initf(chromosome _c, float * _v); + +// Mutates chromosome _c at _index +int chromosome_mutate(chromosome _c, unsigned int _index); + +// Resulting chromosome _c is a crossover of parents _p1 and _p2 at _threshold +int chromosome_crossover(chromosome _p1, + chromosome _p2, + chromosome _c, + unsigned int _threshold); + +// Initializes chromosome to random value +int chromosome_init_random(chromosome _c); + +// Returns integer representation of chromosome +unsigned int chromosome_value(chromosome _c, + unsigned int _index); + +// Returns floating-point representation of chromosome +float chromosome_valuef(chromosome _c, + unsigned int _index); + +// +// genetic algorithm search +// +typedef struct gasearch_s * gasearch; + +typedef float (*gasearch_utility)(void * _userdata, chromosome _c); + +// Create a simple gasearch object; parameters are specified internally +// _utility : chromosome fitness utility function +// _userdata : user data, void pointer passed to _get_utility() callback +// _parent : initial population parent chromosome, governs precision, etc. +// _minmax : search direction +gasearch gasearch_create(gasearch_utility _u, + void * _userdata, + chromosome _parent, + int _minmax); + +// Create a gasearch object, specifying search parameters +// _utility : chromosome fitness utility function +// _userdata : user data, void pointer passed to _get_utility() callback +// _parent : initial population parent chromosome, governs precision, etc. 
+// _minmax : search direction +// _population_size : number of chromosomes in population +// _mutation_rate : probability of mutating chromosomes +gasearch gasearch_create_advanced(gasearch_utility _utility, + void * _userdata, + chromosome _parent, + int _minmax, + unsigned int _population_size, + float _mutation_rate); + + +// Destroy a gasearch object +int gasearch_destroy(gasearch _q); + +// print search parameter internals +int gasearch_print(gasearch _q); + +// set mutation rate +int gasearch_set_mutation_rate(gasearch _q, + float _mutation_rate); + +// set population/selection size +// _q : ga search object +// _population_size : new population size (number of chromosomes) +// _selection_size : selection size (number of parents for new generation) +int gasearch_set_population_size(gasearch _q, + unsigned int _population_size, + unsigned int _selection_size); + +// Execute the search +// _q : ga search object +// _max_iterations : maximum number of iterations to run before bailing +// _target_utility : target utility +float gasearch_run(gasearch _q, + unsigned int _max_iterations, + float _target_utility); + +// iterate over one evolution of the search algorithm +int gasearch_evolve(gasearch _q); + +// get optimal chromosome +// _q : ga search object +// _c : output optimal chromosome +// _utility_opt : fitness of _c +int gasearch_getopt(gasearch _q, + chromosome _c, + float * _utility_opt); + +// +// MODULE : quantization +// + +float compress_mulaw(float _x, float _mu); +float expand_mulaw(float _x, float _mu); + +int compress_cf_mulaw(liquid_float_complex _x, float _mu, liquid_float_complex * _y); +int expand_cf_mulaw(liquid_float_complex _y, float _mu, liquid_float_complex * _x); + +//float compress_alaw(float _x, float _a); +//float expand_alaw(float _x, float _a); + +// inline quantizer: 'analog' signal in [-1, 1] +unsigned int quantize_adc(float _x, unsigned int _num_bits); +float quantize_dac(unsigned int _s, unsigned int _num_bits); + +// structured quantizer + +typedef enum { + LIQUID_COMPANDER_NONE=0, + LIQUID_COMPANDER_LINEAR, + LIQUID_COMPANDER_MULAW, + LIQUID_COMPANDER_ALAW +} liquid_compander_type; + +#define LIQUID_QUANTIZER_MANGLE_FLOAT(name) LIQUID_CONCAT(quantizerf, name) +#define LIQUID_QUANTIZER_MANGLE_CFLOAT(name) LIQUID_CONCAT(quantizercf, name) + +// large macro +// QUANTIZER : name-mangling macro +// T : data type +#define LIQUID_QUANTIZER_DEFINE_API(QUANTIZER,T) \ + \ +/* Amplitude quantization object */ \ +typedef struct QUANTIZER(_s) * QUANTIZER(); \ + \ +/* Create quantizer object given compander type, input range, and the */ \ +/* number of bits to represent the output */ \ +/* _ctype : compander type (linear, mulaw, alaw) */ \ +/* _range : maximum abosolute input range (ignored for now) */ \ +/* _num_bits : number of bits per sample */ \ +QUANTIZER() QUANTIZER(_create)(liquid_compander_type _ctype, \ + float _range, \ + unsigned int _num_bits); \ + \ +/* Destroy object, freeing all internally-allocated memory. 
*/ \ +int QUANTIZER(_destroy)(QUANTIZER() _q); \ + \ +/* Print object properties to stdout, including compander type and */ \ +/* number of bits per sample */ \ +int QUANTIZER(_print)(QUANTIZER() _q); \ + \ +/* Execute quantizer as analog-to-digital converter, accepting input */ \ +/* sample and returning digitized output bits */ \ +/* _q : quantizer object */ \ +/* _x : input sample */ \ +/* _s : output bits */ \ +int QUANTIZER(_execute_adc)(QUANTIZER() _q, \ + T _x, \ + unsigned int * _s); \ + \ +/* Execute quantizer as digital-to-analog converter, accepting input */ \ +/* bits and returning representation of original input sample */ \ +/* _q : quantizer object */ \ +/* _s : input bits */ \ +/* _x : output sample */ \ +int QUANTIZER(_execute_dac)(QUANTIZER() _q, \ + unsigned int _s, \ + T * _x); \ + +LIQUID_QUANTIZER_DEFINE_API(LIQUID_QUANTIZER_MANGLE_FLOAT, float) +LIQUID_QUANTIZER_DEFINE_API(LIQUID_QUANTIZER_MANGLE_CFLOAT, liquid_float_complex) + + +// +// MODULE : random (number generators) +// + + +// Uniform random number generator, [0,1) +float randf(); +float randf_pdf(float _x); +float randf_cdf(float _x); + +// Uniform random number generator with arbitrary bounds, [a,b) +float randuf(float _a, float _b); +float randuf_pdf(float _x, float _a, float _b); +float randuf_cdf(float _x, float _a, float _b); + +// Gauss random number generator, N(0,1) +// f(x) = 1/sqrt(2*pi*sigma^2) * exp{-(x-eta)^2/(2*sigma^2)} +// +// where +// eta = mean +// sigma = standard deviation +// +float randnf(); +void awgn(float *_x, float _nstd); +void crandnf(liquid_float_complex *_y); +void cawgn(liquid_float_complex *_x, float _nstd); +float randnf_pdf(float _x, float _eta, float _sig); +float randnf_cdf(float _x, float _eta, float _sig); + +// Exponential +// f(x) = lambda exp{ -lambda x } +// where +// lambda = spread parameter, lambda > 0 +// x >= 0 +float randexpf(float _lambda); +float randexpf_pdf(float _x, float _lambda); +float randexpf_cdf(float _x, float _lambda); + +// Weibull +// f(x) = (a/b) (x/b)^(a-1) exp{ -(x/b)^a } +// where +// a = alpha : shape parameter +// b = beta : scaling parameter +// g = gamma : location (threshold) parameter +// +float randweibf(float _alpha, float _beta, float _gamma); +float randweibf_pdf(float _x, float _a, float _b, float _g); +float randweibf_cdf(float _x, float _a, float _b, float _g); + +// Gamma +// x^(a-1) exp(-x/b) +// f(x) = ------------------- +// Gamma(a) b^a +// where +// a = alpha : shape parameter, a > 0 +// b = beta : scale parameter, b > 0 +// Gamma(z) = regular gamma function +// x >= 0 +float randgammaf(float _alpha, float _beta); +float randgammaf_pdf(float _x, float _alpha, float _beta); +float randgammaf_cdf(float _x, float _alpha, float _beta); + +// Nakagami-m +// f(x) = (2/Gamma(m)) (m/omega)^m x^(2m-1) exp{-(m/omega)x^2} +// where +// m : shape parameter, m >= 0.5 +// omega : spread parameter, omega > 0 +// Gamma(z): regular complete gamma function +// x >= 0 +float randnakmf(float _m, float _omega); +float randnakmf_pdf(float _x, float _m, float _omega); +float randnakmf_cdf(float _x, float _m, float _omega); + +// Rice-K +// f(x) = (x/sigma^2) exp{ -(x^2+s^2)/(2sigma^2) } I0( x s / sigma^2 ) +// where +// s = sqrt( omega*K/(K+1) ) +// sigma = sqrt(0.5 omega/(K+1)) +// and +// K = shape parameter +// omega = spread parameter +// I0 = modified Bessel function of the first kind +// x >= 0 +float randricekf(float _K, float _omega); +float randricekf_cdf(float _x, float _K, float _omega); +float randricekf_pdf(float _x, float _K, float 
_omega); + + +// Data scrambler : whiten data sequence +void scramble_data(unsigned char * _x, unsigned int _len); +void unscramble_data(unsigned char * _x, unsigned int _len); +void unscramble_data_soft(unsigned char * _x, unsigned int _len); + +// +// MODULE : sequence +// + +// Binary sequence (generic) + +typedef struct bsequence_s * bsequence; + +// Create a binary sequence of a specific length (number of bits) +bsequence bsequence_create(unsigned int num_bits); + +// Free memory in a binary sequence +int bsequence_destroy(bsequence _bs); + +// Clear binary sequence (set to 0's) +int bsequence_reset(bsequence _bs); + +// initialize sequence on external array +int bsequence_init(bsequence _bs, + unsigned char * _v); + +// Print sequence to the screen +int bsequence_print(bsequence _bs); + +// Push bit into to back of a binary sequence +int bsequence_push(bsequence _bs, + unsigned int _bit); + +// circular shift (left) +int bsequence_circshift(bsequence _bs); + +// Correlate two binary sequences together +int bsequence_correlate(bsequence _bs1, bsequence _bs2); + +// compute the binary addition of two bit sequences +int bsequence_add(bsequence _bs1, bsequence _bs2, bsequence _bs3); + +// compute the binary multiplication of two bit sequences +int bsequence_mul(bsequence _bs1, bsequence _bs2, bsequence _bs3); + +// accumulate the 1's in a binary sequence +unsigned int bsequence_accumulate(bsequence _bs); + +// accessor functions +unsigned int bsequence_get_length(bsequence _bs); +unsigned int bsequence_index(bsequence _bs, unsigned int _i); + +// Complementary codes + +// intialize two sequences to complementary codes. sequences must +// be of length at least 8 and a power of 2 (e.g. 8, 16, 32, 64,...) +// _a : sequence 'a' (bsequence object) +// _b : sequence 'b' (bsequence object) +int bsequence_create_ccodes(bsequence _a, bsequence _b); + + +// M-Sequence + +#define LIQUID_MAX_MSEQUENCE_LENGTH 32767 + +// default m-sequence generators: g (hex) m n g (oct) g (binary) +#define LIQUID_MSEQUENCE_GENPOLY_M2 0x0007 // 2 3 7 111 +#define LIQUID_MSEQUENCE_GENPOLY_M3 0x000B // 3 7 13 1011 +#define LIQUID_MSEQUENCE_GENPOLY_M4 0x0013 // 4 15 23 10011 +#define LIQUID_MSEQUENCE_GENPOLY_M5 0x0025 // 5 31 45 100101 +#define LIQUID_MSEQUENCE_GENPOLY_M6 0x0043 // 6 63 103 1000011 +#define LIQUID_MSEQUENCE_GENPOLY_M7 0x0089 // 7 127 211 10001001 +#define LIQUID_MSEQUENCE_GENPOLY_M8 0x011D // 8 255 435 100101101 +#define LIQUID_MSEQUENCE_GENPOLY_M9 0x0211 // 9 511 1021 1000010001 +#define LIQUID_MSEQUENCE_GENPOLY_M10 0x0409 // 10 1023 2011 10000001001 +#define LIQUID_MSEQUENCE_GENPOLY_M11 0x0805 // 11 2047 4005 100000000101 +#define LIQUID_MSEQUENCE_GENPOLY_M12 0x1053 // 12 4095 10123 1000001010011 +#define LIQUID_MSEQUENCE_GENPOLY_M13 0x201b // 13 8191 20033 10000000011011 +#define LIQUID_MSEQUENCE_GENPOLY_M14 0x402b // 14 16383 40053 100000000101011 +#define LIQUID_MSEQUENCE_GENPOLY_M15 0x8003 // 15 32767 100003 1000000000000011 + +typedef struct msequence_s * msequence; + +// create a maximal-length sequence (m-sequence) object with +// an internal shift register length of _m bits. 
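A short sketch of the m-sequence calls declared just below, printing one full period of a PRBS. Illustrative only; the choice m = 7, the wrapper name msequence_example and the include path are assumptions.

    #include <stdio.h>
    #include "liquid.h"   // assumed include path

    void msequence_example(void)
    {
        msequence ms = msequence_create_default(7);     // m = 7 -> length 2^7 - 1 = 127
        unsigned int n = msequence_get_length(ms);
        for (unsigned int i = 0; i < n; i++)
            printf("%u", msequence_advance(ms));        // one output bit per call
        printf("\n");
        msequence_destroy(ms);
    }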
+// _m : generator polynomial length, sequence length is (2^m)-1 +// _g : generator polynomial, starting with most-significant bit +// _a : initial shift register state, default: 000...001 +msequence msequence_create(unsigned int _m, + unsigned int _g, + unsigned int _a); + +// create a maximal-length sequence (m-sequence) object from a generator polynomial +msequence msequence_create_genpoly(unsigned int _g); + +// creates a default maximal-length sequence +msequence msequence_create_default(unsigned int _m); + +// destroy an msequence object, freeing all internal memory +int msequence_destroy(msequence _m); + +// prints the sequence's internal state to the screen +int msequence_print(msequence _m); + +// advance msequence on shift register, returning output bit +unsigned int msequence_advance(msequence _ms); + +// generate pseudo-random symbol from shift register by +// advancing _bps bits and returning compacted symbol +// _ms : m-sequence object +// _bps : bits per symbol of output +unsigned int msequence_generate_symbol(msequence _ms, + unsigned int _bps); + +// reset msequence shift register to original state, typically '1' +int msequence_reset(msequence _ms); + +// initialize a bsequence object on an msequence object +// _bs : bsequence object +// _ms : msequence object +int bsequence_init_msequence(bsequence _bs, + msequence _ms); + +// get the length of the sequence +unsigned int msequence_get_length(msequence _ms); + +// get the internal state of the sequence +unsigned int msequence_get_state(msequence _ms); + +// set the internal state of the sequence +int msequence_set_state(msequence _ms, + unsigned int _a); + + +// +// MODULE : utility +// + +// pack binary array with symbol(s) +// _src : source array [size: _n x 1] +// _n : input source array length +// _k : bit index to write in _src +// _b : number of bits in input symbol +// _sym_in : input symbol +int liquid_pack_array(unsigned char * _src, + unsigned int _n, + unsigned int _k, + unsigned int _b, + unsigned char _sym_in); + +// unpack symbols from binary array +// _src : source array [size: _n x 1] +// _n : input source array length +// _k : bit index to write in _src +// _b : number of bits in output symbol +// _sym_out : output symbol +int liquid_unpack_array(unsigned char * _src, + unsigned int _n, + unsigned int _k, + unsigned int _b, + unsigned char * _sym_out); + +// pack one-bit symbols into bytes (8-bit symbols) +// _sym_in : input symbols array [size: _sym_in_len x 1] +// _sym_in_len : number of input symbols +// _sym_out : output symbols +// _sym_out_len : number of bytes allocated to output symbols array +// _num_written : number of output symbols actually written +int liquid_pack_bytes(unsigned char * _sym_in, + unsigned int _sym_in_len, + unsigned char * _sym_out, + unsigned int _sym_out_len, + unsigned int * _num_written); + +// unpack 8-bit symbols (full bytes) into one-bit symbols +// _sym_in : input symbols array [size: _sym_in_len x 1] +// _sym_in_len : number of input symbols +// _sym_out : output symbols array +// _sym_out_len : number of bytes allocated to output symbols array +// _num_written : number of output symbols actually written +int liquid_unpack_bytes(unsigned char * _sym_in, + unsigned int _sym_in_len, + unsigned char * _sym_out, + unsigned int _sym_out_len, + unsigned int * _num_written); + +// repack bytes with arbitrary symbol sizes +// _sym_in : input symbols array [size: _sym_in_len x 1] +// _sym_in_bps : number of bits per input symbol +// _sym_in_len : number of input symbols +// 
_sym_out : output symbols array +// _sym_out_bps : number of bits per output symbol +// _sym_out_len : number of bytes allocated to output symbols array +// _num_written : number of output symbols actually written +int liquid_repack_bytes(unsigned char * _sym_in, + unsigned int _sym_in_bps, + unsigned int _sym_in_len, + unsigned char * _sym_out, + unsigned int _sym_out_bps, + unsigned int _sym_out_len, + unsigned int * _num_written); + +// shift array to the left _b bits, filling in zeros +// _src : source address [size: _n x 1] +// _n : input data array size +// _b : number of bits to shift +int liquid_lbshift(unsigned char * _src, + unsigned int _n, + unsigned int _b); + +// shift array to the right _b bits, filling in zeros +// _src : source address [size: _n x 1] +// _n : input data array size +// _b : number of bits to shift +int liquid_rbshift(unsigned char * _src, + unsigned int _n, + unsigned int _b); + +// circularly shift array to the left _b bits +// _src : source address [size: _n x 1] +// _n : input data array size +// _b : number of bits to shift +int liquid_lbcircshift(unsigned char * _src, + unsigned int _n, + unsigned int _b); + +// circularly shift array to the right _b bits +// _src : source address [size: _n x 1] +// _n : input data array size +// _b : number of bits to shift +int liquid_rbcircshift(unsigned char * _src, + unsigned int _n, + unsigned int _b); + +// shift array to the left _b bytes, filling in zeros +// _src : source address [size: _n x 1] +// _n : input data array size +// _b : number of bytes to shift +int liquid_lshift(unsigned char * _src, + unsigned int _n, + unsigned int _b); + +// shift array to the right _b bytes, filling in zeros +// _src : source address [size: _n x 1] +// _n : input data array size +// _b : number of bytes to shift +int liquid_rshift(unsigned char * _src, + unsigned int _n, + unsigned int _b); + +// circular shift array to the left _b bytes +// _src : source address [size: _n x 1] +// _n : input data array size +// _b : number of bytes to shift +int liquid_lcircshift(unsigned char * _src, + unsigned int _n, + unsigned int _b); + +// circular shift array to the right _b bytes +// _src : source address [size: _n x 1] +// _n : input data array size +// _b : number of bytes to shift +int liquid_rcircshift(unsigned char * _src, + unsigned int _n, + unsigned int _b); + +// Count the number of ones in an integer +unsigned int liquid_count_ones(unsigned int _x); + +// count number of ones in an integer, modulo 2 +unsigned int liquid_count_ones_mod2(unsigned int _x); + +// compute bindary dot-product between two integers +unsigned int liquid_bdotprod(unsigned int _x, + unsigned int _y); + +// Count leading zeros in an integer +unsigned int liquid_count_leading_zeros(unsigned int _x); + +// Most-significant bit index +unsigned int liquid_msb_index(unsigned int _x); + +// Print string of bits to stdout +int liquid_print_bitstring(unsigned int _x, unsigned int _n); + +// reverse byte, word, etc. +unsigned char liquid_reverse_byte( unsigned char _x); +unsigned int liquid_reverse_uint16(unsigned int _x); +unsigned int liquid_reverse_uint24(unsigned int _x); +unsigned int liquid_reverse_uint32(unsigned int _x); + +// get scale for constant, particularly for plotting purposes +// _val : input value (e.g. 100e6) +// _unit : output unit character (e.g. 'M') +// _scale : output scale (e.g. 
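The repacking helper above can turn payload bytes into small symbols, for example 2-bit QPSK symbols. hsmodem has its own byte-to-symbol converters, so this is purely an illustrative sketch; the test bytes and the wrapper name repack_example are arbitrary.

    #include <stdio.h>
    #include "liquid.h"   // assumed include path

    void repack_example(void)
    {
        unsigned char bytes[3] = {0xA5, 0x3C, 0xF0};    // 24 input bits
        unsigned char syms[12];                         // 24 / 2 = 12 two-bit symbols
        unsigned int  num_written = 0;
        liquid_repack_bytes(bytes, 8, 3,                // input : 8 bits per symbol
                            syms,  2, 12,               // output: 2 bits per symbol
                            &num_written);
        for (unsigned int i = 0; i < num_written; i++)
            printf("%u ", syms[i]);
        printf("(%u symbols)\n", num_written);
    }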
1e-6) +int liquid_get_scale(float _val, + char * _unit, + float * _scale); + +// +// MODULE : vector +// + +#define LIQUID_VECTOR_MANGLE_RF(name) LIQUID_CONCAT(liquid_vectorf, name) +#define LIQUID_VECTOR_MANGLE_CF(name) LIQUID_CONCAT(liquid_vectorcf,name) + +// large macro +// VECTOR : name-mangling macro +// T : data type +// TP : data type (primitive) +#define LIQUID_VECTOR_DEFINE_API(VECTOR,T,TP) \ + \ +/* Initialize vector with scalar: x[i] = c (scalar) */ \ +void VECTOR(_init)(T _c, \ + T * _x, \ + unsigned int _n); \ + \ +/* Add each element pointwise: z[i] = x[i] + y[i] */ \ +void VECTOR(_add)(T * _x, \ + T * _y, \ + unsigned int _n, \ + T * _z); \ + \ +/* Add scalar to each element: y[i] = x[i] + c */ \ +void VECTOR(_addscalar)(T * _x, \ + unsigned int _n, \ + T _c, \ + T * _y); \ + \ +/* Multiply each element pointwise: z[i] = x[i] * y[i] */ \ +void VECTOR(_mul)(T * _x, \ + T * _y, \ + unsigned int _n, \ + T * _z); \ + \ +/* Multiply each element with scalar: y[i] = x[i] * c */ \ +void VECTOR(_mulscalar)(T * _x, \ + unsigned int _n, \ + T _c, \ + T * _y); \ + \ +/* Compute complex phase rotation: x[i] = exp{j theta[i]} */ \ +void VECTOR(_cexpj)(TP * _theta, \ + unsigned int _n, \ + T * _x); \ + \ +/* Compute angle of each element: theta[i] = arg{ x[i] } */ \ +void VECTOR(_carg)(T * _x, \ + unsigned int _n, \ + TP * _theta); \ + \ +/* Compute absolute value of each element: y[i] = |x[i]| */ \ +void VECTOR(_abs)(T * _x, \ + unsigned int _n, \ + TP * _y); \ + \ +/* Compute sum of squares: sum{ |x|^2 } */ \ +TP VECTOR(_sumsq)(T * _x, \ + unsigned int _n); \ + \ +/* Compute l-2 norm: sqrt{ sum{ |x|^2 } } */ \ +TP VECTOR(_norm)(T * _x, \ + unsigned int _n); \ + \ +/* Compute l-p norm: { sum{ |x|^p } }^(1/p) */ \ +TP VECTOR(_pnorm)(T * _x, \ + unsigned int _n, \ + TP _p); \ + \ +/* Scale vector elements by l-2 norm: y[i] = x[i]/norm(x) */ \ +void VECTOR(_normalize)(T * _x, \ + unsigned int _n, \ + T * _y); \ + +LIQUID_VECTOR_DEFINE_API(LIQUID_VECTOR_MANGLE_RF, float, float) +LIQUID_VECTOR_DEFINE_API(LIQUID_VECTOR_MANGLE_CF, liquid_float_complex, float) + +// +// mixed types +// +#if 0 +void liquid_vectorf_add(float * _a, + float * _b, + unsigned int _n, + float * _c); +#endif + +#ifdef __cplusplus +} //extern "C" +#endif // __cplusplus + +#endif // __LIQUID_H__ + diff --git a/hsmodem/liquid_if.cpp b/hsmodem/liquid_if.cpp new file mode 100755 index 0000000..95870ce --- /dev/null +++ b/hsmodem/liquid_if.cpp @@ -0,0 +1,385 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +* +* liquid_if.c ... 
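Before the modem-specific code below, one last liquid.h illustration: the liquid_vectorf helpers defined at the end of the header above. A tiny sketch; the wrapper name vector_example, the test values and the include path are arbitrary.

    #include <stdio.h>
    #include "liquid.h"   // assumed include path

    void vector_example(void)
    {
        float x[3] = {3.0f, 0.0f, 4.0f};
        float y[3];
        printf("norm = %f\n", liquid_vectorf_norm(x, 3));   // 5.0
        liquid_vectorf_normalize(x, 3, y);                  // y = x / ||x||
        printf("y = {%f, %f, %f}\n", y[0], y[1], y[2]);
    }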
functions using liquid-dsp +* +* liquid-dsp must be previously installed by running ./liquid-dsp-install (under linux) +* +*/ + +#include "hsmodem.h" + +void modulator(uint8_t sym_in); +void init_demodulator(); +void close_demodulator(); +void init_modulator(); +void close_modulator(); + +void init_dsp() +{ + init_modulator(); + pb_write_fifo_clear(); + init_demodulator(); +} + +void close_dsp() +{ + close_modulator(); + close_demodulator(); +} + +modulation_scheme getMod() +{ + if(bitsPerSymbol == 2) + return LIQUID_MODEM_QPSK; + //return LIQUID_MODEM_APSK4; + else + return LIQUID_MODEM_APSK8; +} + +// =========== MODULATOR ================================================== + +// modem objects +modem mod = NULL; + +// NCOs for mixing baseband <-> 1500 Hz +#define FREQUENCY 1500 +int type = LIQUID_NCO; // nco type +nco_crcf upnco = NULL; + +// TX-Interpolator Filter Parameters +// 44100 input rate for 2205 Sym/s = 20 +// change for other rates +firinterp_crcf TX_interpolator = NULL; +unsigned int k_SampPerSymb = 20; // 44100 / (4410/2) +unsigned int m_filterDelay_Symbols = 15; // not too short for good filter +float beta_excessBW = 0.2f; // filter excess bandwidth factor +float tau_FracSymbOffset = -0.2f; // fractional symbol offset + +void init_modulator() +{ + close_dsp(); + printf("init TX modulator\n"); + + k_SampPerSymb = txinterpolfactor; + + // create modulator + mod = modem_create(getMod()); + + // create NCO for upmixing to 1500 Hz + float RADIANS_PER_SAMPLE = ((2.0f*(float)M_PI*(float)FREQUENCY)/(float)caprate); + + upnco = nco_crcf_create(LIQUID_NCO); + nco_crcf_set_phase(upnco, 0.0f); + nco_crcf_set_frequency(upnco, RADIANS_PER_SAMPLE); + + // TX: Interpolator Filter + // compute delay + while (tau_FracSymbOffset < 0) tau_FracSymbOffset += 1.0f; // ensure positive tau + float g = k_SampPerSymb*tau_FracSymbOffset; // number of samples offset + int ds=(int)floorf(g); // additional symbol delay + float dt = (g - (float)ds); // fractional sample offset + // force dt to be in [0.5,0.5] + if (dt > 0.5f) + { + dt -= 1.0f; + ds++; + } + + // calculate filter coeffs + unsigned int h_len_NumFilterCoeefs = 2 * k_SampPerSymb * m_filterDelay_Symbols + 1; + float h[1000]; + if (h_len_NumFilterCoeefs >= 1000) + { + printf("h in h_len_NumFilterCoeefs too small\n"); + return; + } + liquid_firdes_prototype( LIQUID_FIRFILT_RRC, + k_SampPerSymb, + m_filterDelay_Symbols, + beta_excessBW, + dt, + h); + // create the filter + TX_interpolator = firinterp_crcf_create(k_SampPerSymb,h,h_len_NumFilterCoeefs); + + printf("DSP created\n"); + return; +} + +void close_modulator() +{ + if(mod != NULL) modem_destroy(mod); + if(upnco != NULL) nco_crcf_destroy(upnco); + if(TX_interpolator != NULL) firinterp_crcf_destroy(TX_interpolator); + mod = NULL; + upnco = NULL; + TX_interpolator = NULL; +} + +// d ... symbols to send +// len ... 
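The rate bookkeeping behind init_modulator() above, shown as a worked example for the 4410 bit/s QPSK mode. The numbers come from the comments in the code; caprate and txinterpolfactor are set elsewhere in hsmodem, so this stand-alone sketch just restates the arithmetic.

    #include <stdio.h>

    void rate_example(void)
    {
        const int caprate          = 44100;           // soundcard rate [samples/s]
        const int txinterpolfactor = 20;              // samples per symbol (k_SampPerSymb)
        const int bitsPerSymbol    = 2;               // QPSK

        int symbolrate = caprate / txinterpolfactor;  // 2205 symbols/s
        int bitrate    = symbolrate * bitsPerSymbol;  // 4410 bit/s
        printf("%d Sym/s -> %d bit/s\n", symbolrate, bitrate);
    }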
number of symbols in d +void sendToModulator(uint8_t *d, int len) +{ + if(upnco == NULL) return; + + int symanz = len * 8 / bitsPerSymbol; + uint8_t syms[10000]; + if (symanz >= 10000) + { + printf("syms in symanz too small\n"); + return; + } + if(bitsPerSymbol == 2) + convertBytesToSyms_QPSK(d, syms, len); + else + convertBytesToSyms_8PSK(d, syms, len); + + for(int i=0; i>1); + + modulator(syms[i]); + } +} + +// call for every symbol +// modulates, filters and upmixes symbols and send it to soundcard +void modulator(uint8_t sym_in) +{ + liquid_float_complex sample; + modem_modulate(mod, sym_in, &sample); + + //printf("TX ================= sample: %f + i%f\n", sample.real, sample.imag); + + // interpolate by k_SampPerSymb + liquid_float_complex y[100]; + if (k_SampPerSymb >= 100) + { + printf("y in k_SampPerSymb too small\n"); + return; + } + + firinterp_crcf_execute(TX_interpolator, sample, y); + + for(unsigned int i=0; i (10000 * 2 + 1)) + { + printf("GRdata_FFTdata: txpl too small !!!\n"); + return; + } + + int bidx = 0; + txpl[bidx++] = 4; // type 4: FFT data follows + + for (int i = 0; i < fftlen; i++) + { + txpl[bidx++] = fft[i] >> 8; + txpl[bidx++] = fft[i] & 0xff; + } + sendUDP(appIP, UdpDataPort_ModemToApp, txpl, bidx); + } +} + + +int demodulator() +{ +static liquid_float_complex ccol[100]; +static int ccol_idx = 0; + + if(dnnco == NULL) return 0; + + // get one received sample + float f; + int ret = cap_read_fifo(&f); + if(ret == 0) return 0; + + make_FFTdata(f*120); + + // downconvert into baseband + // still at soundcard sample rate + nco_crcf_step(dnnco); + + liquid_float_complex in; + in.real = f; + in.imag = f; + liquid_float_complex c; + nco_crcf_mix_down(dnnco,in,&c); + + // c is the actual sample, converted to complex and shifted to baseband + + // this is the first decimator. 
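    /* For reference (illustrative note, not part of the patch): the mix to
     * baseband above is the complex rotation y = x * exp(-j*phi) performed by
     * nco_crcf_mix_down(). A standalone equivalent, assuming the 1500 Hz
     * carrier and the 44100 S/s soundcard rate used elsewhere in this file:
     *
     *   nco_crcf nco = nco_crcf_create(LIQUID_NCO);
     *   nco_crcf_set_frequency(nco, 2.0f*(float)M_PI*1500.0f/44100.0f);
     *   liquid_float_complex x, y;
     *   x.real = 1.0f;
     *   x.imag = 0.0f;
     *   nco_crcf_step(nco);               // advance the oscillator by one sample
     *   nco_crcf_mix_down(nco, x, &y);    // rotate the sample down by the NCO phase
     *   nco_crcf_destroy(nco);
     */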
We need to collect rxPreInterpolfactor number of samples + // then call execute which will give us one decimated sample + ccol[ccol_idx++] = c; + if(ccol_idx < rxPreInterpolfactor) return 1; + ccol_idx = 0; + + // we have rxPreInterpolfactor samples in ccol + liquid_float_complex y; + firdecim_crcf_execute(decim, ccol, &y); + + unsigned int num_symbols_sync; + liquid_float_complex syms; + symtrack_cccf_execute(symtrack, y, &syms, &num_symbols_sync); + + if(num_symbols_sync > 1) printf("symtrack_cccf_execute %d output symbols ???\n",num_symbols_sync); + if(num_symbols_sync != 0) + { + unsigned int sym_out; // output symbol + modem_demodulate(demod, syms, &sym_out); + + measure_speed(1); + + // try to extract a complete frame + uint8_t symb = sym_out; + if(bitsPerSymbol == 2) symb ^= (symb>>1); + GRdata_rxdata(&symb, 1, NULL); + + // send the data "as is" to app for Constellation Diagram + // we have about 2000 S/s, but this many points would make the GUI slow + // so we send only every x + static int ev = 0; + //if (++ev >= 2) + { + ev = 0; + uint32_t re = (uint32_t)(syms.real * 16777216.0); + uint32_t im = (uint32_t)(syms.imag * 16777216.0); + uint8_t txpl[13]; + int idx = 0; + txpl[idx++] = 5; // type 5: IQ data follows + uint32_t sy = 0x3e8; + txpl[idx++] = sy >> 24; + txpl[idx++] = sy >> 16; + txpl[idx++] = sy >> 8; + txpl[idx++] = sy; + txpl[idx++] = re >> 24; + txpl[idx++] = re >> 16; + txpl[idx++] = re >> 8; + txpl[idx++] = re; + txpl[idx++] = im >> 24; + txpl[idx++] = im >> 16; + txpl[idx++] = im >> 8; + txpl[idx++] = im; + sendUDP(appIP, UdpDataPort_ModemToApp, txpl, 13); + } + } + + return 1; +} diff --git a/hsmodem/main_helper.cpp b/hsmodem/main_helper.cpp new file mode 100755 index 0000000..667be34 --- /dev/null +++ b/hsmodem/main_helper.cpp @@ -0,0 +1,107 @@ +/* + * main_helper + * =========== + * by DJ0ABR + * + * functions useful for every main() program + * + * */ + +#include "hsmodem.h" + +#ifdef _LINUX_ +// check if it is already running +int isRunning(char *prgname) +{ + int num = 0; + char s[256]; + sprintf(s,"ps -e | grep %s",prgname); + + FILE *fp = popen(s,"r"); + if(fp) + { + // gets the output of the system command + while (fgets(s, sizeof(s)-1, fp) != NULL) + { + if(strstr(s,prgname) && !strstr(s,"grep")) + { + if(++num == 2) + { + printf("%s is already running, do not start twice !",prgname); + pclose(fp); + return 1; + } + } + } + pclose(fp); + } + return 0; +} + + +// signal handler +void sighandler(int signum) +{ + printf("program stopped by signal\n"); + exit_fft(); + keeprunning = 0; + close(BC_sock_AppToModem); +} + +void install_signal_handler() +{ + + // signal handler, mainly used if the user presses Ctrl-C + struct sigaction sigact; + sigact.sa_handler = sighandler; + sigemptyset(&sigact.sa_mask); + sigact.sa_flags = 0; + sigaction(SIGINT, &sigact, NULL); + sigaction(SIGTERM, &sigact, NULL); + sigaction(SIGQUIT, &sigact, NULL); + sigaction(SIGABRT, &sigact, NULL); // assert() error + + //sigaction(SIGSEGV, &sigact, NULL); + + // switch off signal 13 (broken pipe) + // instead handle the return value of the write or send function + signal(SIGPIPE, SIG_IGN); +} +#endif // _LINUX_ + +void showbytestring(char *title, uint8_t *data, int anz) +{ + printf("%s. 
Len %d: ",title,anz); + for(int i=0; i 400) return; + + for(int i=0; i 400) return data; + + memcpy(rx_scrbuf,data,len); + + for(int i=0; i0; i--) + spdarr[i] = spdarr[i-1]; + spdarr[0] = v; + + int ssum=0; + int cnt = 0; + for(int i=0; i= MAXUDPTHREADS) + { + printf("max number of UDP threads\n"); + exit(0); + } + + rxcfg[rxcfg_idx].sock = sock; + rxcfg[rxcfg_idx].port = port; + rxcfg[rxcfg_idx].rxfunc = rxfunc; + rxcfg[rxcfg_idx].keeprunning = keeprunning; + + // bind port + struct sockaddr_in sin; + +#ifdef _WIN32_ + WSADATA wsaData = { 0 }; + int ires = WSAStartup(MAKEWORD(2, 2), &wsaData); + if (ires != 0) + printf("WSAStartup failed: %d\n", ires); +#endif + + *sock = socket(PF_INET, SOCK_DGRAM, 0); + if (*sock == -1){ + printf("Failed to create Socket\n"); + exit(0); + } + + char enable = 1; + setsockopt(*sock, SOL_SOCKET, SO_REUSEADDR, &enable, sizeof(int)); + + memset(&sin, 0, sizeof(struct sockaddr_in)); + sin.sin_family = AF_INET; + sin.sin_port = htons(port); + sin.sin_addr.s_addr = INADDR_ANY; + + if (bind(*sock, (struct sockaddr *)&sin, sizeof(struct sockaddr_in)) != 0) + { + printf("Failed to bind socket, port:%d\n",port); +#ifdef _LINUX_ + close(*sock); +#endif +#ifdef _WIN32_ + closesocket(*sock); +#endif + exit(0); + } + + printf("port %d sucessfully bound\n", port); + + // port sucessfully bound + // create the receive thread +#ifdef _LINUX_ + pthread_t rxthread; + pthread_create(&rxthread, NULL, threadfunction, &(rxcfg[rxcfg_idx])); +#endif +#ifdef _WIN32_ + _beginthread(threadfunction, 0, &(rxcfg[rxcfg_idx])); +#endif + rxcfg_idx++; +} + + +#ifdef _LINUX_ +void* threadfunction(void* param) { + socklen_t fromlen; +#endif + +#ifdef _WIN32_ +void threadfunction(void* param) { + int fromlen; +#endif + RXCFG rxcfg; + memcpy((uint8_t *)(&rxcfg), (uint8_t *)param, sizeof(RXCFG)); + int recvlen; + char rxbuf[256]; + struct sockaddr_in fromSock; + fromlen = sizeof(struct sockaddr_in); + while(*rxcfg.keeprunning) + { + recvlen = recvfrom(*rxcfg.sock, rxbuf, 256, 0, (struct sockaddr *)&fromSock, &fromlen); + if (recvlen > 0) + { + // data received, send it to callback function + (*rxcfg.rxfunc)((uint8_t *)rxbuf,recvlen, &fromSock); + } + + } +#ifdef _LINUX_ + return NULL; +#endif +} + +// send UDP message +void sendUDP(char *destIP, int destPort, uint8_t *pdata, int len) +{ + int sockfd; + struct sockaddr_in servaddr; + //printf("%d %d %02X\n",destPort,len,pdata[0]); + + // Creating socket file descriptor + if ( (sockfd = socket(AF_INET, SOCK_DGRAM, 0)) < 0 ) { + printf("sendUDP: socket creation failed\n"); + exit(0); + } + memset(&servaddr, 0, sizeof(servaddr)); + // Filling server information + servaddr.sin_family = AF_INET; + servaddr.sin_port = htons(destPort); + //printf("Send to <%s><%d> Len:%d\n",destIP,destPort,len); + servaddr.sin_addr.s_addr=inet_addr(destIP); + sendto(sockfd, (char *)pdata, len, 0, (const struct sockaddr *) &servaddr, sizeof(servaddr)); +#ifdef _LINUX_ + close(sockfd); +#endif +#ifdef _WIN32_ + closesocket(sockfd); +#endif +} + diff --git a/hsmodem/udp.h b/hsmodem/udp.h new file mode 100644 index 0000000..33163fe --- /dev/null +++ b/hsmodem/udp.h @@ -0,0 +1,9 @@ +void UdpRxInit(int *sock, int port, void (*rxfunc)(uint8_t *, int, struct sockaddr_in*), int *keeprunning); +void sendUDP(char *destIP, int destPort, uint8_t *pdata, int len); + +typedef struct { + int *sock; + int port; + void (*rxfunc)(uint8_t *, int, struct sockaddr_in*); + int *keeprunning; +} RXCFG; diff --git a/oscardata/.vs/oscardata/v16/.suo b/oscardata/.vs/oscardata/v16/.suo index 
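A minimal sketch of how the UDP helpers declared in udp.h above are meant to be wired up. Illustrative only; the port number, the callback and variable names, and the wrapper name udp_example are made up, and the include is assumed to pull in udp.h plus the platform socket headers.

    #include <stdint.h>
    #include <stdio.h>
    #include "hsmodem.h"          // assumed to provide udp.h and the socket headers

    static int my_sock;
    static int my_keeprunning = 1;

    // called from the receive thread for every datagram on the bound port
    static void my_rxcallback(uint8_t *data, int len, struct sockaddr_in *from)
    {
        (void)from;
        printf("got %d bytes, first byte 0x%02X\n", len, len > 0 ? data[0] : 0);
    }

    void udp_example(void)
    {
        // bind an arbitrary test port and start the background receive thread
        UdpRxInit(&my_sock, 40131, my_rxcallback, &my_keeprunning);

        // loop one test byte back to ourselves
        uint8_t msg[1] = { 0x55 };
        sendUDP((char *)"127.0.0.1", 40131, msg, 1);
    }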
17913e6..41a9b17 100755 Binary files a/oscardata/.vs/oscardata/v16/.suo and b/oscardata/.vs/oscardata/v16/.suo differ diff --git a/oscardata/oscardata/Form1.Designer.cs b/oscardata/oscardata/Form1.Designer.cs index dc2eafb..dfe0e22 100755 --- a/oscardata/oscardata/Form1.Designer.cs +++ b/oscardata/oscardata/Form1.Designer.cs @@ -71,7 +71,14 @@ this.bt_file_html = new System.Windows.Forms.Button(); this.bt_file_ascii = new System.Windows.Forms.Button(); this.tabPage5 = new System.Windows.Forms.TabPage(); - this.textBox1 = new System.Windows.Forms.TextBox(); + this.textBox3 = new System.Windows.Forms.TextBox(); + this.textBox2 = new System.Windows.Forms.TextBox(); + this.label4 = new System.Windows.Forms.Label(); + this.cb_audioCAP = new System.Windows.Forms.ComboBox(); + this.label3 = new System.Windows.Forms.Label(); + this.cb_audioPB = new System.Windows.Forms.ComboBox(); + this.bt_resetmodem = new System.Windows.Forms.Button(); + this.tb_shutdown = new System.Windows.Forms.TextBox(); this.bt_shutdown = new System.Windows.Forms.Button(); this.cb_savegoodfiles = new System.Windows.Forms.CheckBox(); this.cb_stampcall = new System.Windows.Forms.CheckBox(); @@ -80,6 +87,9 @@ this.cb_speed = new System.Windows.Forms.ComboBox(); this.label_speed = new System.Windows.Forms.Label(); this.timer_searchmodem = new System.Windows.Forms.Timer(this.components); + this.groupBox2 = new System.Windows.Forms.GroupBox(); + this.groupBox3 = new System.Windows.Forms.GroupBox(); + this.groupBox4 = new System.Windows.Forms.GroupBox(); this.statusStrip1.SuspendLayout(); this.tabPage1.SuspendLayout(); this.tabPage2.SuspendLayout(); @@ -89,6 +99,9 @@ this.tabControl1.SuspendLayout(); this.tabPage3.SuspendLayout(); this.tabPage5.SuspendLayout(); + this.groupBox2.SuspendLayout(); + this.groupBox3.SuspendLayout(); + this.groupBox4.SuspendLayout(); this.SuspendLayout(); // // timer_udpTX @@ -498,12 +511,9 @@ // // tabPage5 // - this.tabPage5.Controls.Add(this.textBox1); - this.tabPage5.Controls.Add(this.bt_shutdown); - this.tabPage5.Controls.Add(this.cb_savegoodfiles); - this.tabPage5.Controls.Add(this.cb_stampcall); - this.tabPage5.Controls.Add(this.tb_callsign); - this.tabPage5.Controls.Add(this.label1); + this.tabPage5.Controls.Add(this.groupBox4); + this.tabPage5.Controls.Add(this.groupBox3); + this.tabPage5.Controls.Add(this.groupBox2); this.tabPage5.Location = new System.Drawing.Point(4, 22); this.tabPage5.Name = "tabPage5"; this.tabPage5.Size = new System.Drawing.Size(1291, 553); @@ -511,22 +521,91 @@ this.tabPage5.Text = "Setup"; this.tabPage5.UseVisualStyleBackColor = true; // - // textBox1 + // textBox3 // - this.textBox1.BorderStyle = System.Windows.Forms.BorderStyle.None; - this.textBox1.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(0))); - this.textBox1.ForeColor = System.Drawing.Color.Red; - this.textBox1.Location = new System.Drawing.Point(379, 78); - this.textBox1.Multiline = true; - this.textBox1.Name = "textBox1"; - this.textBox1.Size = new System.Drawing.Size(259, 55); - this.textBox1.TabIndex = 5; - this.textBox1.Text = "before switching off the modem SBC\r\nclick here to avoid defective SD-cards.\r\nWAIT" + - " 1 minute before powering OFF the modem."; + this.textBox3.BorderStyle = System.Windows.Forms.BorderStyle.None; + this.textBox3.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.textBox3.ForeColor = 
System.Drawing.Color.Black; + this.textBox3.Location = new System.Drawing.Point(138, 73); + this.textBox3.Multiline = true; + this.textBox3.Name = "textBox3"; + this.textBox3.Size = new System.Drawing.Size(177, 19); + this.textBox3.TabIndex = 12; + this.textBox3.Text = "(HDMI is usually not used)"; + // + // textBox2 + // + this.textBox2.BorderStyle = System.Windows.Forms.BorderStyle.None; + this.textBox2.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.textBox2.ForeColor = System.Drawing.Color.Black; + this.textBox2.Location = new System.Drawing.Point(189, 48); + this.textBox2.Multiline = true; + this.textBox2.Name = "textBox2"; + this.textBox2.Size = new System.Drawing.Size(126, 50); + this.textBox2.TabIndex = 11; + this.textBox2.Text = "in case the RX has sync\r\nproblems, it can be\r\nre-initialized here."; + // + // label4 + // + this.label4.AutoSize = true; + this.label4.Location = new System.Drawing.Point(12, 50); + this.label4.Name = "label4"; + this.label4.Size = new System.Drawing.Size(112, 13); + this.label4.TabIndex = 10; + this.label4.Text = "Audio Record Device:"; + // + // cb_audioCAP + // + this.cb_audioCAP.FormattingEnabled = true; + this.cb_audioCAP.Location = new System.Drawing.Point(138, 46); + this.cb_audioCAP.Name = "cb_audioCAP"; + this.cb_audioCAP.Size = new System.Drawing.Size(230, 21); + this.cb_audioCAP.TabIndex = 9; + this.cb_audioCAP.Text = "Default"; + // + // label3 + // + this.label3.AutoSize = true; + this.label3.Location = new System.Drawing.Point(12, 23); + this.label3.Name = "label3"; + this.label3.Size = new System.Drawing.Size(121, 13); + this.label3.TabIndex = 8; + this.label3.Text = "Audio Playback Device:"; + // + // cb_audioPB + // + this.cb_audioPB.FormattingEnabled = true; + this.cb_audioPB.Location = new System.Drawing.Point(138, 19); + this.cb_audioPB.Name = "cb_audioPB"; + this.cb_audioPB.Size = new System.Drawing.Size(230, 21); + this.cb_audioPB.TabIndex = 7; + this.cb_audioPB.Text = "Default"; + // + // bt_resetmodem + // + this.bt_resetmodem.Location = new System.Drawing.Point(189, 19); + this.bt_resetmodem.Name = "bt_resetmodem"; + this.bt_resetmodem.Size = new System.Drawing.Size(117, 23); + this.bt_resetmodem.TabIndex = 6; + this.bt_resetmodem.Text = "Reset RX Modem"; + this.bt_resetmodem.UseVisualStyleBackColor = true; + this.bt_resetmodem.Click += new System.EventHandler(this.bt_resetmodem_Click); + // + // tb_shutdown + // + this.tb_shutdown.BorderStyle = System.Windows.Forms.BorderStyle.None; + this.tb_shutdown.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.tb_shutdown.ForeColor = System.Drawing.Color.Red; + this.tb_shutdown.Location = new System.Drawing.Point(17, 48); + this.tb_shutdown.Multiline = true; + this.tb_shutdown.Name = "tb_shutdown"; + this.tb_shutdown.Size = new System.Drawing.Size(155, 50); + this.tb_shutdown.TabIndex = 5; + this.tb_shutdown.Text = "before switching off the \r\nmodem SBC click here to \r\navoid defective SD-cards.\r\n"; // // bt_shutdown // - this.bt_shutdown.Location = new System.Drawing.Point(379, 49); + this.bt_shutdown.Location = new System.Drawing.Point(17, 19); this.bt_shutdown.Name = "bt_shutdown"; this.bt_shutdown.Size = new System.Drawing.Size(155, 23); this.bt_shutdown.TabIndex = 4; @@ -539,7 +618,7 @@ this.cb_savegoodfiles.AutoSize = true; this.cb_savegoodfiles.Checked = true; 
this.cb_savegoodfiles.CheckState = System.Windows.Forms.CheckState.Checked; - this.cb_savegoodfiles.Location = new System.Drawing.Point(106, 136); + this.cb_savegoodfiles.Location = new System.Drawing.Point(71, 90); this.cb_savegoodfiles.Name = "cb_savegoodfiles"; this.cb_savegoodfiles.Size = new System.Drawing.Size(159, 17); this.cb_savegoodfiles.TabIndex = 3; @@ -551,7 +630,7 @@ this.cb_stampcall.AutoSize = true; this.cb_stampcall.Checked = true; this.cb_stampcall.CheckState = System.Windows.Forms.CheckState.Checked; - this.cb_stampcall.Location = new System.Drawing.Point(106, 113); + this.cb_stampcall.Location = new System.Drawing.Point(71, 67); this.cb_stampcall.Name = "cb_stampcall"; this.cb_stampcall.Size = new System.Drawing.Size(146, 17); this.cb_stampcall.TabIndex = 2; @@ -561,7 +640,7 @@ // tb_callsign // this.tb_callsign.CharacterCasing = System.Windows.Forms.CharacterCasing.Upper; - this.tb_callsign.Location = new System.Drawing.Point(106, 49); + this.tb_callsign.Location = new System.Drawing.Point(71, 28); this.tb_callsign.Name = "tb_callsign"; this.tb_callsign.Size = new System.Drawing.Size(151, 20); this.tb_callsign.TabIndex = 1; @@ -569,7 +648,7 @@ // label1 // this.label1.AutoSize = true; - this.label1.Location = new System.Drawing.Point(49, 52); + this.label1.Location = new System.Drawing.Point(14, 31); this.label1.Name = "label1"; this.label1.Size = new System.Drawing.Size(46, 13); this.label1.TabIndex = 0; @@ -579,19 +658,21 @@ // this.cb_speed.FormattingEnabled = true; this.cb_speed.Items.AddRange(new object[] { - "3000 QPSK BW: 1800 Hz ", - "3150 QPSK BW: 1900 Hz ", - "3675 QPSK BW: 2200 Hz ", + "3000 QPSK BW: 1700 Hz ", + "3150 QPSK BW: 1800 Hz ", + "3675 QPSK BW: 2100 Hz ", "4000 QPSK BW: 2400 Hz ", - "4410 QPSK BW: 2700 Hz (default QO-100)", - "4800 QPSK BW: 2900 Hz (experimental)", + "4410 QPSK BW: 2500 Hz (QO-100)", + "4800 QPSK BW: 2700 Hz", "5500 8PSK BW: 2300 Hz", - "6000 8PSK BW: 2500 Hz (QO-100 beacon)"}); + "6000 8PSK BW: 2500 Hz (QO-100)", + "6600 8PSK BW: 2600 Hz", + "7200 8PSK BW: 2700 Hz"}); this.cb_speed.Location = new System.Drawing.Point(636, 644); this.cb_speed.Name = "cb_speed"; this.cb_speed.Size = new System.Drawing.Size(324, 21); this.cb_speed.TabIndex = 11; - this.cb_speed.Text = "4410 QPSK BW: 2700 Hz (default QO-100)"; + this.cb_speed.Text = "4410 QPSK BW: 2500 Hz (QO-100)"; this.cb_speed.SelectedIndexChanged += new System.EventHandler(this.comboBox1_SelectedIndexChanged); // // label_speed @@ -608,6 +689,46 @@ this.timer_searchmodem.Interval = 1000; this.timer_searchmodem.Tick += new System.EventHandler(this.timer_searchmodem_Tick); // + // groupBox2 + // + this.groupBox2.Controls.Add(this.tb_callsign); + this.groupBox2.Controls.Add(this.label1); + this.groupBox2.Controls.Add(this.cb_stampcall); + this.groupBox2.Controls.Add(this.cb_savegoodfiles); + this.groupBox2.Location = new System.Drawing.Point(12, 13); + this.groupBox2.Name = "groupBox2"; + this.groupBox2.Size = new System.Drawing.Size(384, 126); + this.groupBox2.TabIndex = 13; + this.groupBox2.TabStop = false; + this.groupBox2.Text = "Personal Settings"; + // + // groupBox3 + // + this.groupBox3.Controls.Add(this.cb_audioPB); + this.groupBox3.Controls.Add(this.label3); + this.groupBox3.Controls.Add(this.textBox3); + this.groupBox3.Controls.Add(this.cb_audioCAP); + this.groupBox3.Controls.Add(this.label4); + this.groupBox3.Location = new System.Drawing.Point(12, 146); + this.groupBox3.Name = "groupBox3"; + this.groupBox3.Size = new System.Drawing.Size(384, 107); + 
this.groupBox3.TabIndex = 14; + this.groupBox3.TabStop = false; + this.groupBox3.Text = "Transceiver Audio"; + // + // groupBox4 + // + this.groupBox4.Controls.Add(this.bt_shutdown); + this.groupBox4.Controls.Add(this.tb_shutdown); + this.groupBox4.Controls.Add(this.bt_resetmodem); + this.groupBox4.Controls.Add(this.textBox2); + this.groupBox4.Location = new System.Drawing.Point(12, 259); + this.groupBox4.Name = "groupBox4"; + this.groupBox4.Size = new System.Drawing.Size(384, 105); + this.groupBox4.TabIndex = 15; + this.groupBox4.TabStop = false; + this.groupBox4.Text = "Maintenance"; + // // Form1 // this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F); @@ -622,7 +743,7 @@ this.ForeColor = System.Drawing.SystemColors.ControlText; this.Icon = ((System.Drawing.Icon)(resources.GetObject("$this.Icon"))); this.Name = "Form1"; - this.Text = "QO-100 NB Transponder HS Transmission AMSAT-DL V0.1 by DJ0ABR"; + this.Text = "QO-100 NB Transponder HS Transmission AMSAT-DL V0.2 by DJ0ABR"; this.FormClosing += new System.Windows.Forms.FormClosingEventHandler(this.Form1_FormClosing); this.statusStrip1.ResumeLayout(false); this.statusStrip1.PerformLayout(); @@ -637,7 +758,12 @@ this.tabPage3.ResumeLayout(false); this.tabPage3.PerformLayout(); this.tabPage5.ResumeLayout(false); - this.tabPage5.PerformLayout(); + this.groupBox2.ResumeLayout(false); + this.groupBox2.PerformLayout(); + this.groupBox3.ResumeLayout(false); + this.groupBox3.PerformLayout(); + this.groupBox4.ResumeLayout(false); + this.groupBox4.PerformLayout(); this.ResumeLayout(false); this.PerformLayout(); @@ -693,8 +819,18 @@ private System.Windows.Forms.Label label1; private System.Windows.Forms.CheckBox cb_stampcall; private System.Windows.Forms.CheckBox cb_savegoodfiles; - private System.Windows.Forms.TextBox textBox1; + private System.Windows.Forms.TextBox tb_shutdown; private System.Windows.Forms.Button bt_shutdown; + private System.Windows.Forms.Button bt_resetmodem; + private System.Windows.Forms.Label label4; + private System.Windows.Forms.ComboBox cb_audioCAP; + private System.Windows.Forms.Label label3; + private System.Windows.Forms.ComboBox cb_audioPB; + private System.Windows.Forms.TextBox textBox2; + private System.Windows.Forms.TextBox textBox3; + private System.Windows.Forms.GroupBox groupBox4; + private System.Windows.Forms.GroupBox groupBox3; + private System.Windows.Forms.GroupBox groupBox2; } } diff --git a/oscardata/oscardata/Form1.cs b/oscardata/oscardata/Form1.cs index a6d11ea..f5deef3 100755 --- a/oscardata/oscardata/Form1.cs +++ b/oscardata/oscardata/Form1.cs @@ -52,7 +52,11 @@ namespace oscardata OperatingSystem osversion = System.Environment.OSVersion; statics.OSversion = osversion.Platform.ToString(); if (osversion.VersionString.Contains("indow")) + { statics.ostype = 0; + tb_shutdown.Visible = false; + bt_shutdown.Visible = false; + } else statics.ostype = 1; // Linux @@ -90,8 +94,8 @@ namespace oscardata { if (Udp.GetBufferCount() > 3) return; - Byte[] txdata = new byte[statics.PayloadLen+2]; - + Byte[] txdata = new byte[statics.PayloadLen + 2]; + txdata[0] = (Byte)statics.BERtest; // BER Test Marker txdata[1] = frameinfo; @@ -173,6 +177,22 @@ namespace oscardata comboBox1_SelectedIndexChanged(null, null); // send speed to modem } } + + if (statics.GotAudioDevices == 1) + { + statics.GotAudioDevices = 2; + // populate combo boxes + foreach (String s in statics.AudioPBdevs) + { + if(s.Length > 1) + cb_audioPB.Items.Add(s); + } + foreach (String s in statics.AudioCAPdevs) + { + if (s.Length > 1) + 
cb_audioCAP.Items.Add(s); + } + } } private void Form1_FormClosing(object sender, FormClosingEventArgs e) @@ -188,6 +208,7 @@ namespace oscardata int speed; int tmpnum = 0; int file_lostframes = 0; + int last_fileid = 0; private void timer_udprx_Tick(object sender, EventArgs e) { while (true) @@ -246,7 +267,11 @@ namespace oscardata //Console.WriteLine("first, single"); rxdata = ArraySend.GetAndRemoveHeader(rxdata); if (rxdata == null) return; + if (last_fileid == ArraySend.FileID) return; // got first frame for this ID already + last_fileid = ArraySend.FileID; } + else + last_fileid = 0; // collect all received data into zip_RXtempfilename Byte[] ba = null; @@ -280,6 +305,11 @@ namespace oscardata // reduce for the real file length Byte[] fc = File.ReadAllBytes(statics.zip_RXtempfilename); Byte[] fdst = new byte[ArraySend.FileSize]; + if(fc.Length < ArraySend.FileSize) + { + Console.WriteLine("len=" + fc.Length + " fz=" + ArraySend.FileSize); + return; + } Array.Copy(fc, 0, fdst, 0, ArraySend.FileSize); File.WriteAllBytes(statics.zip_RXtempfilename, fdst); @@ -330,8 +360,11 @@ namespace oscardata if (minfo == statics.FirstFrame) { rxdata = ArraySend.GetAndRemoveHeader(rxdata); - if (rxdata == null) return; + if (last_fileid == ArraySend.FileID) return; // got first frame for this ID already + last_fileid = ArraySend.FileID; } + else + last_fileid = 0; Byte[] ba = null; Byte[] nba; @@ -410,8 +443,11 @@ namespace oscardata { //Console.WriteLine("first, single"); rxdata = ArraySend.GetAndRemoveHeader(rxdata); - if (rxdata == null) return; + if (last_fileid == ArraySend.FileID) return; // got first frame for this ID already + last_fileid = ArraySend.FileID; } + else + last_fileid = 0; // collect all received data into zip_RXtempfilename Byte[] ba = null; @@ -445,6 +481,12 @@ namespace oscardata // reduce for the real file length Byte[] fc = File.ReadAllBytes(statics.zip_RXtempfilename); Byte[] fdst = new byte[ArraySend.FileSize]; + if (fc.Length < ArraySend.FileSize) + { + Console.WriteLine("len=" + fc.Length + " fz=" + ArraySend.FileSize); + return; + } + Console.WriteLine("copy final binary file"); Array.Copy(fc, 0, fdst, 0, ArraySend.FileSize); File.WriteAllBytes(statics.zip_RXtempfilename, fdst); @@ -587,35 +629,24 @@ namespace oscardata private void timer_qpsk_Tick(object sender, EventArgs e) { - panel_constel.Invalidate(); + if(Udp.IQavail()) + panel_constel.Invalidate(); + panel_txspectrum.Invalidate(); } private void panel_constel_Paint(object sender, PaintEventArgs e) { - Pen pen = new Pen(Brushes.LightGray); - e.Graphics.DrawEllipse(pen, 0, 0, panel_constel.Size.Width-1, panel_constel.Size.Height-1); - e.Graphics.DrawLine(pen, panel_constel.Size.Width / 2, 0, panel_constel.Size.Width / 2, panel_constel.Size.Height); - e.Graphics.DrawLine(pen, 0, panel_constel.Size.Height / 2, panel_constel.Size.Width, panel_constel.Size.Height/2); - - while (true) + Bitmap bm = Udp.UdpBitmap(); + if (bm != null) { - qpskitem qi = Udp.UdpGetIQ(); - if (qi == null) break; + Pen pen = new Pen(Brushes.LightGray); + e.Graphics.DrawEllipse(pen, 0, 0, panel_constel.Size.Width - 1, panel_constel.Size.Height - 1); + e.Graphics.DrawLine(pen, panel_constel.Size.Width / 2, 0, panel_constel.Size.Width / 2, panel_constel.Size.Height); + e.Graphics.DrawLine(pen, 0, panel_constel.Size.Height / 2, panel_constel.Size.Width, panel_constel.Size.Height / 2); - // re and im are in the range of +/- 2^24 (16777216) - // scale it to +/- 128 - double fre = qi.re; - double fim = qi.im; - - fre = fre * panel_constel.Size.Width 
/ 2 / 16777216.0; - fim = fim * panel_constel.Size.Width / 2 / 16777216.0; - - // scale it to the picture - int x = panel_constel.Size.Width / 2 + (int)fre - 2; - int y = panel_constel.Size.Height / 2 + (int)fim - 2; - - e.Graphics.FillEllipse(Brushes.Blue, x, y, 2, 2); + e.Graphics.DrawImage(bm, 0, 0); + bm.Dispose(); } } @@ -959,6 +990,35 @@ namespace oscardata else line += " sequence OK"; + int bits = rxframecounter * 258 * 8; + int bytes = rxframecounter * 258; + String sbit = "b"; + String sbyt = "B"; + + if (bits > 1000) + { + bits /= 1000; + sbit = "kb"; + } + if (bits > 1000) + { + bits /= 1000; + sbit = "Mb"; + } + + if (bytes > 1000) + { + bytes /= 1000; + sbyt = "kB"; + } + if (bytes > 1000) + { + bytes /= 1000; + sbyt = "MB"; + } + + line += " " + bits.ToString() + " " + sbit + " " + bytes.ToString() + " " + sbyt; + line += " BER: " + string.Format("{0:#.##E+0}", ber); // ber.ToString("E3"); line += "\r\n"; @@ -1062,16 +1122,38 @@ namespace oscardata return ip; } + Byte getPBaudioDevice() + { + String s = cb_audioPB.Text; + Byte x = (Byte)cb_audioPB.Items.IndexOf(s); + //if (s.ToUpper() == "DEFAULT") x = 255; + return x; + } + + Byte getCAPaudioDevice() + { + String s = cb_audioCAP.Text; + Byte x = (Byte)cb_audioCAP.Items.IndexOf(s); + //if (s.ToUpper() == "DEFAULT") x = 255; + return x; + } + /* * search for the modem IP: - * send a search message (2 bytes) via UDP to port UdpBCport + * send a search message via UDP to port UdpBCport * if a modem receives this message, it returns with an * UDP message to UdpBCport containing a String with it's IP address + * this message also contains the selected Audio Devices */ private void search_modem() { - Udp.UdpBCsend(new Byte[] { (Byte)0x3c }, GetMyBroadcastIP(), statics.UdpBCport_AppToModem); + Byte[] txb = new byte[3]; + txb[0] = 0x3c; // ID of this message + txb[1] = getPBaudioDevice(); + txb[2] = getCAPaudioDevice(); + + Udp.UdpBCsend(txb, GetMyBroadcastIP(), statics.UdpBCport_AppToModem); Udp.searchtimeout++; if (Udp.searchtimeout >= 3) @@ -1161,6 +1243,8 @@ namespace oscardata case 5: real_rate = 4800; break; case 6: real_rate = 5525; break; case 7: real_rate = 6000; break; + case 8: real_rate = 6615; break; + case 9: real_rate = 7200; break; } statics.setDatarate(real_rate); @@ -1211,17 +1295,6 @@ namespace oscardata } - /// - // TEST ONLY: tell modem to send a file - private void button1_Click(object sender, EventArgs e) - { - Byte[] txdata = new byte[statics.PayloadLen + 2]; - txdata[0] = (Byte)statics.AutosendFile; - - // and transmit it - Udp.UdpSend(txdata); - } - private void bt_openrxfile_Click(object sender, EventArgs e) { if (statics.ostype == 0) @@ -1288,6 +1361,8 @@ namespace oscardata cb_stampcall.Checked = (s == "1"); s = ReadString(sr); cb_savegoodfiles.Checked = (s == "1"); + cb_audioPB.Text = ReadString(sr); + cb_audioCAP.Text = ReadString(sr); } } catch @@ -1295,6 +1370,9 @@ namespace oscardata tb_callsign.Text = ""; cb_speed.Text = "4000 QPSK BW: 2400 Hz (default QO-100)"; } + + if (cb_audioPB.Text.Length <= 1) cb_audioPB.Text = "Default"; + if (cb_audioCAP.Text.Length <= 1) cb_audioCAP.Text = "Default"; } void save_Setup() @@ -1307,6 +1385,8 @@ namespace oscardata sw.WriteLine(cb_speed.Text); sw.WriteLine(cb_stampcall.Checked?"1":"0"); sw.WriteLine(cb_savegoodfiles.Checked ? "1" : "0"); + sw.WriteLine(cb_audioPB.Text); + sw.WriteLine(cb_audioCAP.Text); } } catch { } @@ -1326,5 +1406,25 @@ namespace oscardata MessageBox.Show("Please wait abt. 
1 minute before powering OFF the modem", "Shut Down Modem", MessageBoxButtons.OK); } } + + /// + // TEST ONLY: tell modem to send a file + private void button1_Click(object sender, EventArgs e) + { + Byte[] txdata = new byte[statics.PayloadLen + 2]; + txdata[0] = (Byte)statics.AutosendFile; + + // and transmit it + Udp.UdpSend(txdata); + } + + private void bt_resetmodem_Click(object sender, EventArgs e) + { + Byte[] txdata = new byte[statics.PayloadLen + 2]; + txdata[0] = (Byte)statics.ResetModem; + + // and transmit it + Udp.UdpSend(txdata); + } } } diff --git a/oscardata/oscardata/bin/Release/oscardata.exe b/oscardata/oscardata/bin/Release/oscardata.exe index 885c268..23299c3 100755 Binary files a/oscardata/oscardata/bin/Release/oscardata.exe and b/oscardata/oscardata/bin/Release/oscardata.exe differ diff --git a/oscardata/oscardata/config.cs b/oscardata/oscardata/config.cs index ad94c13..6bfb5c0 100755 --- a/oscardata/oscardata/config.cs +++ b/oscardata/oscardata/config.cs @@ -25,6 +25,7 @@ namespace oscardata public static Byte AutosendFile = 17; public static Byte AutosendFolder = 18; public static Byte Modem_shutdown = 19; + public static Byte ResetModem = 20; // frame sequence, modem needs that for i.e. sending a preamble public static Byte FirstFrame = 0; @@ -52,6 +53,9 @@ namespace oscardata public static String RXimageStorage = "RXimages"; public static String OSversion = ""; public static int ostype = 0; // 0=Windows, 1=Linux + public static int GotAudioDevices = 0; + public static String[] AudioPBdevs; + public static String[] AudioCAPdevs; public static void setDatarate(int rate) { diff --git a/oscardata/oscardata/udp.cs b/oscardata/oscardata/udp.cs index 1a566b3..dffafca 100755 --- a/oscardata/oscardata/udp.cs +++ b/oscardata/oscardata/udp.cs @@ -12,9 +12,11 @@ using System; using System.Collections; +using System.Drawing; using System.Net; using System.Net.Sockets; using System.Threading; +using System.Windows.Forms.VisualStyles; namespace oscardata { @@ -91,6 +93,13 @@ namespace oscardata { statics.ModemIP = RemoteEndpoint.Address.ToString(); searchtimeout = 0; + // message b contains audio devices + String s = statics.ByteArrayToString(b); + String[] sa1 = s.Split(new char[] { '^' }); + statics.AudioPBdevs = sa1[0].Split(new char[] { '~' }); + statics.AudioCAPdevs = sa1[1].Split(new char[] { '~' }); + if(statics.GotAudioDevices == 0) + statics.GotAudioDevices = 1; } // FFT data @@ -108,10 +117,11 @@ namespace oscardata lastb[0] = b[i]; // test if aligned + int re = 0, im = 0; if (lastb[0] == 0 && lastb[1] == 0 && lastb[2] == 3 && lastb[3] == 0xe8) { // we are aligned to a re value - int re = lastb[4]; + re = lastb[4]; re <<= 8; re += lastb[5]; re <<= 8; @@ -119,23 +129,18 @@ namespace oscardata re <<= 8; re += lastb[7]; - int im = lastb[8]; + im = lastb[8]; im <<= 8; im += lastb[9]; im <<= 8; im += lastb[10]; im <<= 8; im += lastb[11]; - - qpskitem q = new qpskitem(); - q.re = re; - q.im = im; - uq_iq.Add(q); } else if (lastb[0] == 0xe8 && lastb[1] == 3 && lastb[2] == 0 && lastb[3] == 0) { // we are aligned to a re value - int re = lastb[7]; + re = lastb[7]; re <<= 8; re += lastb[6]; re <<= 8; @@ -143,19 +148,16 @@ namespace oscardata re <<= 8; re += lastb[4]; - int im = lastb[11]; + im = lastb[11]; im <<= 8; im += lastb[10]; im <<= 8; im += lastb[9]; im <<= 8; im += lastb[8]; - - qpskitem q = new qpskitem(); - q.re = re; - q.im = im; - uq_iq.Add(q); } + + drawBitmap(re, im); } } } @@ -164,6 +166,39 @@ namespace oscardata } } + static int panelw = 75, panelh = 75; + static 
int maxdrawanz = 250; + static int drawanz = 0; + static Bitmap bm; + static void drawBitmap(int re, int im) + { + if (re == 0 && im == 0) return; + if (++drawanz >= maxdrawanz && uq_iq.Count() <= 1) + { + drawanz = 0; + uq_iq.Add(bm); + bm = new Bitmap(75, 75); + } + + using (Graphics gr = Graphics.FromImage(bm)) + { + // re and im are in the range of +/- 2^24 (16777216) + // scale it to +/- 128 + double fre = re; + double fim = im; + + fre = fre * panelw / 2 / 16777216.0; + fim = fim * panelh / 2 / 16777216.0; + + // scale it to the picture + int x = panelw / 2 + (int)fre; + int y = panelh / 2 + (int)fim; + + int et = 1; + gr.FillEllipse(Brushes.Blue, x - et, y - et, et * 2, et * 2); + } + } + static AutoResetEvent autoEvent = new AutoResetEvent(false); // Udp TX Loop runs in its own thread @@ -265,6 +300,19 @@ namespace oscardata return uq_iq.GetQPSKitem(); } + + public static Bitmap UdpBitmap() + { + if (uq_iq.Count() == 0) return null; + + return uq_iq.GetBitmap(); + } + + public static bool IQavail() + { + if (uq_iq.Count() == 0) return false; + return true; + } } // this class is a thread safe queue wich is used @@ -289,13 +337,32 @@ namespace oscardata } } - public Byte [] Getarr() + public void Add(Bitmap bm) + { + lock (myQ.SyncRoot) + { + myQ.Enqueue(bm); + } + } + + public Bitmap GetBitmap() + { + Bitmap b; + + lock (myQ.SyncRoot) + { + b = (Bitmap)myQ.Dequeue(); + } + return b; + } + + public Byte[] Getarr() { Byte[] b; lock (myQ.SyncRoot) { - b = (Byte [])myQ.Dequeue(); + b = (Byte[])myQ.Dequeue(); } return b; } diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/.signature.p7s b/oscardata/packages/MathNet.Numerics.4.12.0/.signature.p7s deleted file mode 100755 index 0b69026..0000000 Binary files a/oscardata/packages/MathNet.Numerics.4.12.0/.signature.p7s and /dev/null differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/MathNet.Numerics.4.12.0.nupkg b/oscardata/packages/MathNet.Numerics.4.12.0/MathNet.Numerics.4.12.0.nupkg deleted file mode 100755 index 391b6fb..0000000 Binary files a/oscardata/packages/MathNet.Numerics.4.12.0/MathNet.Numerics.4.12.0.nupkg and /dev/null differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/icon.png b/oscardata/packages/MathNet.Numerics.4.12.0/icon.png deleted file mode 100755 index 7f46a40..0000000 Binary files a/oscardata/packages/MathNet.Numerics.4.12.0/icon.png and /dev/null differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.dll b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.dll deleted file mode 100755 index d1539c1..0000000 Binary files a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.dll and /dev/null differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.xml b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.xml deleted file mode 100755 index 5f9e8af..0000000 --- a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.xml +++ /dev/null @@ -1,57152 +0,0 @@ - - - - MathNet.Numerics - - - - - Useful extension methods for Arrays. - - - - - Copies the values from on array to another. - - The source array. - The destination array. - - - - Copies the values from on array to another. - - The source array. - The destination array. - - - - Copies the values from on array to another. - - The source array. - The destination array. - - - - Copies the values from on array to another. - - The source array. - The destination array. 
- - - - Enumerative Combinatorics and Counting. - - - - - Count the number of possible variations without repetition. - The order matters and each object can be chosen only once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - Maximum number of distinct variations. - - - - Count the number of possible variations with repetition. - The order matters and each object can be chosen more than once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. - Maximum number of distinct variations with repetition. - - - - Count the number of possible combinations without repetition. - The order does not matter and each object can be chosen only once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - Maximum number of combinations. - - - - Count the number of possible combinations with repetition. - The order does not matter and an object can be chosen more than once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. - Maximum number of combinations with repetition. - - - - Count the number of possible permutations (without repetition). - - Number of (distinguishable) elements in the set. - Maximum number of permutations without repetition. - - - - Generate a random permutation, without repetition, by generating the index numbers 0 to N-1 and shuffle them randomly. - Implemented using Fisher-Yates Shuffling. - - An array of length N that contains (in any order) the integers of the interval [0, N). - Number of (distinguishable) elements in the set. - The random number generator to use. Optional; the default random source will be used if null. - - - - Select a random permutation, without repetition, from a data array by reordering the provided array in-place. - Implemented using Fisher-Yates Shuffling. The provided data array will be modified. - - The data array to be reordered. The array will be modified by this routine. - The random number generator to use. Optional; the default random source will be used if null. - - - - Select a random permutation from a data sequence by returning the provided data in random order. - Implemented using Fisher-Yates Shuffling. - - The data elements to be reordered. - The random number generator to use. Optional; the default random source will be used if null. - - - - Generate a random combination, without repetition, by randomly selecting some of N elements. - - Number of elements in the set. - The random number generator to use. Optional; the default random source will be used if null. - Boolean mask array of length N, for each item true if it is selected. - - - - Generate a random combination, without repetition, by randomly selecting k of N elements. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - Boolean mask array of length N, for each item true if it is selected. - - - - Select a random combination, without repetition, from a data sequence by selecting k elements in original order. - - The data source to choose from. - Number of elements (k) to choose from the data set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. 
- The chosen combination, in the original order. - - - - Generates a random combination, with repetition, by randomly selecting k of N elements. - - Number of elements in the set. - Number of elements to choose from the set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - Integer mask array of length N, for each item the number of times it was selected. - - - - Select a random combination, with repetition, from a data sequence by selecting k elements in original order. - - The data source to choose from. - Number of elements (k) to choose from the data set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen combination with repetition, in the original order. - - - - Generate a random variation, without repetition, by randomly selecting k of n elements with order. - Implemented using partial Fisher-Yates Shuffling. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - An array of length K that contains the indices of the selections as integers of the interval [0, N). - - - - Select a random variation, without repetition, from a data sequence by randomly selecting k elements in random order. - Implemented using partial Fisher-Yates Shuffling. - - The data source to choose from. - Number of elements (k) to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen variation, in random order. - - - - Generate a random variation, with repetition, by randomly selecting k of n elements with order. - - Number of elements in the set. - Number of elements to choose from the set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - An array of length K that contains the indices of the selections as integers of the interval [0, N). - - - - Select a random variation, with repetition, from a data sequence by randomly selecting k elements in random order. - - The data source to choose from. - Number of elements (k) to choose from the data set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen variation with repetition, in random order. - - - - 32-bit single precision complex numbers class. - - - - The class Complex32 provides all elementary operations - on complex numbers. All the operators +, -, - *, /, ==, != are defined in the - canonical way. Additional complex trigonometric functions - are also provided. Note that the Complex32 structures - has two special constant values and - . - - - - Complex32 x = new Complex32(1f,2f); - Complex32 y = Complex32.FromPolarCoordinates(1f, Math.Pi); - Complex32 z = (x + y) / (x - y); - - - - For mathematical details about complex numbers, please - have a look at the - Wikipedia - - - - - - The real component of the complex number. - - - - - The imaginary component of the complex number. - - - - - Initializes a new instance of the Complex32 structure with the given real - and imaginary parts. - - The value for the real component. - The value for the imaginary component. - - - - Creates a complex number from a point's polar coordinates. 
- - A complex number. - The magnitude, which is the distance from the origin (the intersection of the x-axis and the y-axis) to the number. - The phase, which is the angle from the line to the horizontal axis, measured in radians. - - - - Returns a new instance - with a real number equal to zero and an imaginary number equal to zero. - - - - - Returns a new instance - with a real number equal to one and an imaginary number equal to zero. - - - - - Returns a new instance - with a real number equal to zero and an imaginary number equal to one. - - - - - Returns a new instance - with real and imaginary numbers positive infinite. - - - - - Returns a new instance - with real and imaginary numbers not a number. - - - - - Gets the real component of the complex number. - - The real component of the complex number. - - - - Gets the real imaginary component of the complex number. - - The real imaginary component of the complex number. - - - - Gets the phase or argument of this Complex32. - - - Phase always returns a value bigger than negative Pi and - smaller or equal to Pi. If this Complex32 is zero, the Complex32 - is assumed to be positive real with an argument of zero. - - The phase or argument of this Complex32 - - - - Gets the magnitude (or absolute value) of a complex number. - - Assuming that magnitude of (inf,a) and (a,inf) and (inf,inf) is inf and (NaN,a), (a,NaN) and (NaN,NaN) is NaN - The magnitude of the current instance. - - - - Gets the squared magnitude (or squared absolute value) of a complex number. - - The squared magnitude of the current instance. - - - - Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) - - The unity of this Complex32. - - - - Gets a value indicating whether the Complex32 is zero. - - true if this instance is zero; otherwise, false. - - - - Gets a value indicating whether the Complex32 is one. - - true if this instance is one; otherwise, false. - - - - Gets a value indicating whether the Complex32 is the imaginary unit. - - true if this instance is ImaginaryOne; otherwise, false. - - - - Gets a value indicating whether the provided Complex32evaluates - to a value that is not a number. - - - true if this instance is ; otherwise, - false. - - - - - Gets a value indicating whether the provided Complex32 evaluates to an - infinite value. - - - true if this instance is infinite; otherwise, false. - - - True if it either evaluates to a complex infinity - or to a directed infinity. - - - - - Gets a value indicating whether the provided Complex32 is real. - - true if this instance is a real number; otherwise, false. - - - - Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. - - - true if this instance is real nonnegative number; otherwise, false. - - - - - Exponential of this Complex32 (exp(x), E^x). - - - The exponential of this complex number. - - - - - Natural Logarithm of this Complex32 (Base E). - - The natural logarithm of this complex number. - - - - Common Logarithm of this Complex32 (Base 10). - - The common logarithm of this complex number. - - - - Logarithm of this Complex32 with custom base. - - The logarithm of this complex number. - - - - Raise this Complex32 to the given value. - - - The exponent. - - - The complex number raised to the given exponent. - - - - - Raise this Complex32 to the inverse of the given value. - - - The root exponent. - - - The complex raised to the inverse of the given exponent. 
- - - - - The Square (power 2) of this Complex32 - - - The square of this complex number. - - - - - The Square Root (power 1/2) of this Complex32 - - - The square root of this complex number. - - - - - Evaluate all square roots of this Complex32. - - - - - Evaluate all cubic roots of this Complex32. - - - - - Equality test. - - One of complex numbers to compare. - The other complex numbers to compare. - true if the real and imaginary components of the two complex numbers are equal; false otherwise. - - - - Inequality test. - - One of complex numbers to compare. - The other complex numbers to compare. - true if the real or imaginary components of the two complex numbers are not equal; false otherwise. - - - - Unary addition. - - The complex number to operate on. - Returns the same complex number. - - - - Unary minus. - - The complex number to operate on. - The negated value of the . - - - Addition operator. Adds two complex numbers together. - The result of the addition. - One of the complex numbers to add. - The other complex numbers to add. - - - Subtraction operator. Subtracts two complex numbers. - The result of the subtraction. - The complex number to subtract from. - The complex number to subtract. - - - Addition operator. Adds a complex number and float together. - The result of the addition. - The complex numbers to add. - The float value to add. - - - Subtraction operator. Subtracts float value from a complex value. - The result of the subtraction. - The complex number to subtract from. - The float value to subtract. - - - Addition operator. Adds a complex number and float together. - The result of the addition. - The float value to add. - The complex numbers to add. - - - Subtraction operator. Subtracts complex value from a float value. - The result of the subtraction. - The float vale to subtract from. - The complex value to subtract. - - - Multiplication operator. Multiplies two complex numbers. - The result of the multiplication. - One of the complex numbers to multiply. - The other complex number to multiply. - - - Multiplication operator. Multiplies a complex number with a float value. - The result of the multiplication. - The float value to multiply. - The complex number to multiply. - - - Multiplication operator. Multiplies a complex number with a float value. - The result of the multiplication. - The complex number to multiply. - The float value to multiply. - - - Division operator. Divides a complex number by another. - Enhanced Smith's algorithm for dividing two complex numbers - - The result of the division. - The dividend. - The divisor. - - - - Helper method for dividing. - - Re first - Im first - Re second - Im second - - - - - Division operator. Divides a float value by a complex number. - Algorithm based on Smith's algorithm - - The result of the division. - The dividend. - The divisor. - - - Division operator. Divides a complex number by a float value. - The result of the division. - The dividend. - The divisor. - - - - Computes the conjugate of a complex number and returns the result. - - - - - Returns the multiplicative inverse of a complex number. - - - - - Converts the value of the current complex number to its equivalent string representation in Cartesian form. - - The string representation of the current instance in Cartesian form. - - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified format for its real and imaginary parts. 
- - The string representation of the current instance in Cartesian form. - A standard or custom numeric format string. - - is not a valid format string. - - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified culture-specific formatting information. - - The string representation of the current instance in Cartesian form, as specified by . - An object that supplies culture-specific formatting information. - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified format and culture-specific format information for its real and imaginary parts. - The string representation of the current instance in Cartesian form, as specified by and . - A standard or custom numeric format string. - An object that supplies culture-specific formatting information. - - is not a valid format string. - - - - Checks if two complex numbers are equal. Two complex numbers are equal if their - corresponding real and imaginary components are equal. - - - Returns true if the two objects are the same object, or if their corresponding - real and imaginary components are equal, false otherwise. - - - The complex number to compare to with. - - - - - The hash code for the complex number. - - - The hash code of the complex number. - - - The hash code is calculated as - System.Math.Exp(ComplexMath.Absolute(complexNumber)). - - - - - Checks if two complex numbers are equal. Two complex numbers are equal if their - corresponding real and imaginary components are equal. - - - Returns true if the two objects are the same object, or if their corresponding - real and imaginary components are equal, false otherwise. - - - The complex number to compare to with. - - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a float. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Parse a part (real or complex) from a complex number. - - Start Token. - Is set to true if the part identified itself as being imaginary. - - An that supplies culture-specific - formatting information. - - Resulting part as float. - - - - - Converts the string representation of a complex number to a single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Converts the string representation of a complex number to single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Explicit conversion of a real decimal to a Complex32. - - The decimal value to convert. - The result of the conversion. 
- - - - Explicit conversion of a Complex to a Complex32. - - The decimal value to convert. - The result of the conversion. - - - - Implicit conversion of a real byte to a Complex32. - - The byte value to convert. - The result of the conversion. - - - - Implicit conversion of a real short to a Complex32. - - The short value to convert. - The result of the conversion. - - - - Implicit conversion of a signed byte to a Complex32. - - The signed byte value to convert. - The result of the conversion. - - - - Implicit conversion of a unsigned real short to a Complex32. - - The unsigned short value to convert. - The result of the conversion. - - - - Implicit conversion of a real int to a Complex32. - - The int value to convert. - The result of the conversion. - - - - Implicit conversion of a BigInteger int to a Complex32. - - The BigInteger value to convert. - The result of the conversion. - - - - Implicit conversion of a real long to a Complex32. - - The long value to convert. - The result of the conversion. - - - - Implicit conversion of a real uint to a Complex32. - - The uint value to convert. - The result of the conversion. - - - - Implicit conversion of a real ulong to a Complex32. - - The ulong value to convert. - The result of the conversion. - - - - Implicit conversion of a real float to a Complex32. - - The float value to convert. - The result of the conversion. - - - - Implicit conversion of a real double to a Complex32. - - The double value to convert. - The result of the conversion. - - - - Converts this Complex32 to a . - - A with the same values as this Complex32. - - - - Returns the additive inverse of a specified complex number. - - The result of the real and imaginary components of the value parameter multiplied by -1. - A complex number. - - - - Computes the conjugate of a complex number and returns the result. - - The conjugate of . - A complex number. - - - - Adds two complex numbers and returns the result. - - The sum of and . - The first complex number to add. - The second complex number to add. - - - - Subtracts one complex number from another and returns the result. - - The result of subtracting from . - The value to subtract from (the minuend). - The value to subtract (the subtrahend). - - - - Returns the product of two complex numbers. - - The product of the and parameters. - The first complex number to multiply. - The second complex number to multiply. - - - - Divides one complex number by another and returns the result. - - The quotient of the division. - The complex number to be divided. - The complex number to divide by. - - - - Returns the multiplicative inverse of a complex number. - - The reciprocal of . - A complex number. - - - - Returns the square root of a specified complex number. - - The square root of . - A complex number. - - - - Gets the absolute value (or magnitude) of a complex number. - - The absolute value of . - A complex number. - - - - Returns e raised to the power specified by a complex number. - - The number e raised to the power . - A complex number that specifies a power. - - - - Returns a specified complex number raised to a power specified by a complex number. - - The complex number raised to the power . - A complex number to be raised to a power. - A complex number that specifies a power. - - - - Returns a specified complex number raised to a power specified by a single-precision floating-point number. - - The complex number raised to the power . - A complex number to be raised to a power. 
- A single-precision floating-point number that specifies a power. - - - - Returns the natural (base e) logarithm of a specified complex number. - - The natural (base e) logarithm of . - A complex number. - - - - Returns the logarithm of a specified complex number in a specified base. - - The logarithm of in base . - A complex number. - The base of the logarithm. - - - - Returns the base-10 logarithm of a specified complex number. - - The base-10 logarithm of . - A complex number. - - - - Returns the sine of the specified complex number. - - The sine of . - A complex number. - - - - Returns the cosine of the specified complex number. - - The cosine of . - A complex number. - - - - Returns the tangent of the specified complex number. - - The tangent of . - A complex number. - - - - Returns the angle that is the arc sine of the specified complex number. - - The angle which is the arc sine of . - A complex number. - - - - Returns the angle that is the arc cosine of the specified complex number. - - The angle, measured in radians, which is the arc cosine of . - A complex number that represents a cosine. - - - - Returns the angle that is the arc tangent of the specified complex number. - - The angle that is the arc tangent of . - A complex number. - - - - Returns the hyperbolic sine of the specified complex number. - - The hyperbolic sine of . - A complex number. - - - - Returns the hyperbolic cosine of the specified complex number. - - The hyperbolic cosine of . - A complex number. - - - - Returns the hyperbolic tangent of the specified complex number. - - The hyperbolic tangent of . - A complex number. - - - - Extension methods for the Complex type provided by System.Numerics - - - - - Gets the squared magnitude of the Complex number. - - The number to perform this operation on. - The squared magnitude of the Complex number. - - - - Gets the squared magnitude of the Complex number. - - The number to perform this operation on. - The squared magnitude of the Complex number. - - - - Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) - - The unity of this Complex. - - - - Gets the conjugate of the Complex number. - - The number to perform this operation on. - - The semantic of setting the conjugate is such that - - // a, b of type Complex32 - a.Conjugate = b; - - is equivalent to - - // a, b of type Complex32 - a = b.Conjugate - - - The conjugate of the number. - - - - Returns the multiplicative inverse of a complex number. - - - - - Exponential of this Complex (exp(x), E^x). - - The number to perform this operation on. - - The exponential of this complex number. - - - - - Natural Logarithm of this Complex (Base E). - - The number to perform this operation on. - - The natural logarithm of this complex number. - - - - - Common Logarithm of this Complex (Base 10). - - The common logarithm of this complex number. - - - - Logarithm of this Complex with custom base. - - The logarithm of this complex number. - - - - Raise this Complex to the given value. - - The number to perform this operation on. - - The exponent. - - - The complex number raised to the given exponent. - - - - - Raise this Complex to the inverse of the given value. - - The number to perform this operation on. - - The root exponent. - - - The complex raised to the inverse of the given exponent. - - - - - The Square (power 2) of this Complex - - The number to perform this operation on. - - The square of this complex number. 
- - - - - The Square Root (power 1/2) of this Complex - - The number to perform this operation on. - - The square root of this complex number. - - - - - Evaluate all square roots of this Complex. - - - - - Evaluate all cubic roots of this Complex. - - - - - Gets a value indicating whether the Complex32 is zero. - - The number to perform this operation on. - true if this instance is zero; otherwise, false. - - - - Gets a value indicating whether the Complex32 is one. - - The number to perform this operation on. - true if this instance is one; otherwise, false. - - - - Gets a value indicating whether the Complex32 is the imaginary unit. - - true if this instance is ImaginaryOne; otherwise, false. - The number to perform this operation on. - - - - Gets a value indicating whether the provided Complex32evaluates - to a value that is not a number. - - The number to perform this operation on. - - true if this instance is NaN; otherwise, - false. - - - - - Gets a value indicating whether the provided Complex32 evaluates to an - infinite value. - - The number to perform this operation on. - - true if this instance is infinite; otherwise, false. - - - True if it either evaluates to a complex infinity - or to a directed infinity. - - - - - Gets a value indicating whether the provided Complex32 is real. - - The number to perform this operation on. - true if this instance is a real number; otherwise, false. - - - - Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. - - The number to perform this operation on. - - true if this instance is real nonnegative number; otherwise, false. - - - - - Returns a Norm of a value of this type, which is appropriate for measuring how - close this value is to zero. - - - - - Returns a Norm of a value of this type, which is appropriate for measuring how - close this value is to zero. - - - - - Returns a Norm of the difference of two values of this type, which is - appropriate for measuring how close together these two values are. - - - - - Returns a Norm of the difference of two values of this type, which is - appropriate for measuring how close together these two values are. - - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - The string to parse. - - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Parse a part (real or complex) from a complex number. - - Start Token. - Is set to true if the part identified itself as being imaginary. - - An that supplies culture-specific - formatting information. - - Resulting part as double. - - - - - Converts the string representation of a complex number to a double-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain Complex.Zero. 
This parameter is passed uninitialized. - - - - - Converts the string representation of a complex number to double-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Creates a Complex32 number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - - - Creates a Complex32 number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Converts the string representation of a complex number to a single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized. - - - - - Converts the string representation of a complex number to single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. - - - - - A collection of frequently used mathematical constants. - - - - The number e - - - The number log[2](e) - - - The number log[10](e) - - - The number log[e](2) - - - The number log[e](10) - - - The number log[e](pi) - - - The number log[e](2*pi)/2 - - - The number 1/e - - - The number sqrt(e) - - - The number sqrt(2) - - - The number sqrt(3) - - - The number sqrt(1/2) = 1/sqrt(2) = sqrt(2)/2 - - - The number sqrt(3)/2 - - - The number pi - - - The number pi*2 - - - The number pi/2 - - - The number pi*3/2 - - - The number pi/4 - - - The number sqrt(pi) - - - The number sqrt(2pi) - - - The number sqrt(pi/2) - - - The number sqrt(2*pi*e) - - - The number log(sqrt(2*pi)) - - - The number log(sqrt(2*pi*e)) - - - The number log(2 * sqrt(e / pi)) - - - The number 1/pi - - - The number 2/pi - - - The number 1/sqrt(pi) - - - The number 1/sqrt(2pi) - - - The number 2/sqrt(pi) - - - The number 2 * sqrt(e / pi) - - - The number (pi)/180 - factor to convert from Degree (deg) to Radians (rad). - - - - - The number (pi)/200 - factor to convert from NewGrad (grad) to Radians (rad). - - - - - The number ln(10)/20 - factor to convert from Power Decibel (dB) to Neper (Np). Use this version when the Decibel represent a power gain but the compared values are not powers (e.g. 
amplitude, current, voltage). - - - The number ln(10)/10 - factor to convert from Neutral Decibel (dB) to Neper (Np). Use this version when either both or neither of the Decibel and the compared values represent powers. - - - The Catalan constant - Sum(k=0 -> inf){ (-1)^k/(2*k + 1)2 } - - - The Euler-Mascheroni constant - lim(n -> inf){ Sum(k=1 -> n) { 1/k - log(n) } } - - - The number (1+sqrt(5))/2, also known as the golden ratio - - - The Glaisher constant - e^(1/12 - Zeta(-1)) - - - The Khinchin constant - prod(k=1 -> inf){1+1/(k*(k+2))^log(k,2)} - - - - The size of a double in bytes. - - - - - The size of an int in bytes. - - - - - The size of a float in bytes. - - - - - The size of a Complex in bytes. - - - - - The size of a Complex in bytes. - - - - Speed of Light in Vacuum: c_0 = 2.99792458e8 [m s^-1] (defined, exact; 2007 CODATA) - - - Magnetic Permeability in Vacuum: mu_0 = 4*Pi * 10^-7 [N A^-2 = kg m A^-2 s^-2] (defined, exact; 2007 CODATA) - - - Electric Permittivity in Vacuum: epsilon_0 = 1/(mu_0*c_0^2) [F m^-1 = A^2 s^4 kg^-1 m^-3] (defined, exact; 2007 CODATA) - - - Characteristic Impedance of Vacuum: Z_0 = mu_0*c_0 [Ohm = m^2 kg s^-3 A^-2] (defined, exact; 2007 CODATA) - - - Newtonian Constant of Gravitation: G = 6.67429e-11 [m^3 kg^-1 s^-2] (2007 CODATA) - - - Planck's constant: h = 6.62606896e-34 [J s = m^2 kg s^-1] (2007 CODATA) - - - Reduced Planck's constant: h_bar = h / (2*Pi) [J s = m^2 kg s^-1] (2007 CODATA) - - - Planck mass: m_p = (h_bar*c_0/G)^(1/2) [kg] (2007 CODATA) - - - Planck temperature: T_p = (h_bar*c_0^5/G)^(1/2)/k [K] (2007 CODATA) - - - Planck length: l_p = h_bar/(m_p*c_0) [m] (2007 CODATA) - - - Planck time: t_p = l_p/c_0 [s] (2007 CODATA) - - - Elementary Electron Charge: e = 1.602176487e-19 [C = A s] (2007 CODATA) - - - Magnetic Flux Quantum: theta_0 = h/(2*e) [Wb = m^2 kg s^-2 A^-1] (2007 CODATA) - - - Conductance Quantum: G_0 = 2*e^2/h [S = m^-2 kg^-1 s^3 A^2] (2007 CODATA) - - - Josephson Constant: K_J = 2*e/h [Hz V^-1] (2007 CODATA) - - - Von Klitzing Constant: R_K = h/e^2 [Ohm = m^2 kg s^-3 A^-2] (2007 CODATA) - - - Bohr Magneton: mu_B = e*h_bar/2*m_e [J T^-1] (2007 CODATA) - - - Nuclear Magneton: mu_N = e*h_bar/2*m_p [J T^-1] (2007 CODATA) - - - Fine Structure Constant: alpha = e^2/4*Pi*e_0*h_bar*c_0 [1] (2007 CODATA) - - - Rydberg Constant: R_infty = alpha^2*m_e*c_0/2*h [m^-1] (2007 CODATA) - - - Bor Radius: a_0 = alpha/4*Pi*R_infty [m] (2007 CODATA) - - - Hartree Energy: E_h = 2*R_infty*h*c_0 [J] (2007 CODATA) - - - Quantum of Circulation: h/2*m_e [m^2 s^-1] (2007 CODATA) - - - Fermi Coupling Constant: G_F/(h_bar*c_0)^3 [GeV^-2] (2007 CODATA) - - - Weak Mixin Angle: sin^2(theta_W) [1] (2007 CODATA) - - - Electron Mass: [kg] (2007 CODATA) - - - Electron Mass Energy Equivalent: [J] (2007 CODATA) - - - Electron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Electron Compton Wavelength: [m] (2007 CODATA) - - - Classical Electron Radius: [m] (2007 CODATA) - - - Thomson Cross Section: [m^2] (2002 CODATA) - - - Electron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Electon G-Factor: [1] (2007 CODATA) - - - Muon Mass: [kg] (2007 CODATA) - - - Muon Mass Energy Equivalent: [J] (2007 CODATA) - - - Muon Molar Mass: [kg mol^-1] (2007 CODATA) - - - Muon Compton Wavelength: [m] (2007 CODATA) - - - Muon Magnetic Moment: [J T^-1] (2007 CODATA) - - - Muon G-Factor: [1] (2007 CODATA) - - - Tau Mass: [kg] (2007 CODATA) - - - Tau Mass Energy Equivalent: [J] (2007 CODATA) - - - Tau Molar Mass: [kg mol^-1] (2007 CODATA) - - - Tau Compton Wavelength: [m] (2007 CODATA) - 
- - Proton Mass: [kg] (2007 CODATA) - - - Proton Mass Energy Equivalent: [J] (2007 CODATA) - - - Proton Molar Mass: [kg mol^-1] (2007 CODATA) - - - Proton Compton Wavelength: [m] (2007 CODATA) - - - Proton Magnetic Moment: [J T^-1] (2007 CODATA) - - - Proton G-Factor: [1] (2007 CODATA) - - - Proton Shielded Magnetic Moment: [J T^-1] (2007 CODATA) - - - Proton Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Proton Shielded Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Neutron Mass: [kg] (2007 CODATA) - - - Neutron Mass Energy Equivalent: [J] (2007 CODATA) - - - Neutron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Neuron Compton Wavelength: [m] (2007 CODATA) - - - Neutron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Neutron G-Factor: [1] (2007 CODATA) - - - Neutron Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Deuteron Mass: [kg] (2007 CODATA) - - - Deuteron Mass Energy Equivalent: [J] (2007 CODATA) - - - Deuteron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Deuteron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Helion Mass: [kg] (2007 CODATA) - - - Helion Mass Energy Equivalent: [J] (2007 CODATA) - - - Helion Molar Mass: [kg mol^-1] (2007 CODATA) - - - Avogadro constant: [mol^-1] (2010 CODATA) - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 - - - The SI prefix factor corresponding to 1 000 - - - The SI prefix factor corresponding to 100 - - - The SI prefix factor corresponding to 10 - - - The SI prefix factor corresponding to 0.1 - - - The SI prefix factor corresponding to 0.01 - - - The SI prefix factor corresponding to 0.001 - - - The SI prefix factor corresponding to 0.000 001 - - - The SI prefix factor corresponding to 0.000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 000 000 001 - - - - Sets parameters for the library. - - - - - Use a specific provider if configured, e.g. using - environment variables, or fall back to the best providers. - - - - - Use the best provider available. - - - - - Use the Intel MKL native provider for linear algebra. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Use the Intel MKL native provider for linear algebra, with the specified configuration parameters. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Try to use the Intel MKL native provider for linear algebra. - - - True if the provider was found and initialized successfully. - False if it failed and the previous provider is still active. - - - - - Use the Nvidia CUDA native provider for linear algebra. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Try to use the Nvidia CUDA native provider for linear algebra. 
- - - True if the provider was found and initialized successfully. - False if it failed and the previous provider is still active. - - - - - Use the OpenBLAS native provider for linear algebra. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Try to use the OpenBLAS native provider for linear algebra. - - - True if the provider was found and initialized successfully. - False if it failed and the previous provider is still active. - - - - - Try to use any available native provider in an undefined order. - - - True if one of the native providers was found and successfully initialized. - False if it failed and the previous provider is still active. - - - - - Gets or sets a value indicating whether the distribution classes check validate each parameter. - For the multivariate distributions this could involve an expensive matrix factorization. - The default setting of this property is true. - - - - - Gets or sets a value indicating whether to use thread safe random number generators (RNG). - Thread safe RNG about two and half time slower than non-thread safe RNG. - - - true to use thread safe random number generators ; otherwise, false. - - - - - Optional path to try to load native provider binaries from. - - - - - Gets or sets a value indicating how many parallel worker threads shall be used - when parallelization is applicable. - - Default to the number of processor cores, must be between 1 and 1024 (inclusive). - - - - Gets or sets the TaskScheduler used to schedule the worker tasks. - - - - - Gets or sets the order of the matrix when linear algebra provider - must calculate multiply in parallel threads. - - The order. Default 64, must be at least 3. - - - - Gets or sets the number of elements a vector or matrix - must contain before we multiply threads. - - Number of elements. Default 300, must be at least 3. - - - - Numerical Derivative. - - - - - Initialized a NumericalDerivative with the given points and center. - - - - - Initialized a NumericalDerivative with the default points and center for the given order. - - - - - Evaluates the derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - Derivative order. - - - - Creates a function handle for the derivative of a scalar univariate function. - - Univariate function handle. - Derivative order. - - - - Evaluates the first derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - - - - Creates a function handle for the first derivative of a scalar univariate function. - - Univariate function handle. - - - - Evaluates the second derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - - - - Creates a function handle for the second derivative of a scalar univariate function. - - Univariate function handle. - - - - Evaluates the partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - - - - Creates a function handle for the partial derivative of a multivariate function. - - Multivariate function handle. - Index of independent variable for partial derivative. - Derivative order. - - - - Evaluates the first partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. 
- Index of independent variable for partial derivative. - - - - Creates a function handle for the first partial derivative of a multivariate function. - - Multivariate function handle. - Index of independent variable for partial derivative. - - - - Evaluates the partial derivative of a bivariate function. - - Bivariate function handle. - First argument at which to evaluate the derivative. - Second argument at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - - - - Creates a function handle for the partial derivative of a bivariate function. - - Bivariate function handle. - Index of independent variable for partial derivative. - Derivative order. - - - - Evaluates the first partial derivative of a bivariate function. - - Bivariate function handle. - First argument at which to evaluate the derivative. - Second argument at which to evaluate the derivative. - Index of independent variable for partial derivative. - - - - Creates a function handle for the first partial derivative of a bivariate function. - - Bivariate function handle. - Index of independent variable for partial derivative. - - - - Class to calculate finite difference coefficients using Taylor series expansion method. - - - For n points, coefficients are calculated up to the maximum derivative order possible (n-1). - The current function value position specifies the "center" for surrounding coefficients. - Selecting the first, middle or last positions represent forward, backwards and central difference methods. - - - - - - - Number of points for finite difference coefficients. Changing this value recalculates the coefficients table. - - - - - Initializes a new instance of the class. - - Number of finite difference coefficients. - - - - Gets the finite difference coefficients for a specified center and order. - - Current function position with respect to coefficients. Must be within point range. - Order of finite difference coefficients. - Vector of finite difference coefficients. - - - - Gets the finite difference coefficients for all orders at a specified center. - - Current function position with respect to coefficients. Must be within point range. - Rectangular array of coefficients, with columns specifying order. - - - - Type of finite different step size. - - - - - The absolute step size value will be used in numerical derivatives, regardless of order or function parameters. - - - - - A base step size value, h, will be scaled according to the function input parameter. A common example is hx = h*(1+abs(x)), however - this may vary depending on implementation. This definition only guarantees that the only scaling will be relative to the - function input parameter and not the order of the finite difference derivative. - - - - - A base step size value, eps (typically machine precision), is scaled according to the finite difference coefficient order - and function input parameter. The initial scaling according to finite different coefficient order can be thought of as producing a - base step size, h, that is equivalent to scaling. This step size is then scaled according to the function - input parameter. Although implementation may vary, an example of second order accurate scaling may be (eps)^(1/3)*(1+abs(x)). - - - - - Class to evaluate the numerical derivative of a function using finite difference approximations. - Variable point and center methods can be initialized . 
- This class can also be used to return function handles (delegates) for a fixed derivative order and variable. - It is possible to evaluate the derivative and partial derivative of univariate and multivariate functions respectively. - - - - - Initializes a NumericalDerivative class with the default 3 point center difference method. - - - - - Initialized a NumericalDerivative class. - - Number of points for finite difference derivatives. - Location of the center with respect to other points. Value ranges from zero to points-1. - - - - Sets and gets the finite difference step size. This value is for each function evaluation if relative step size types are used. - If the base step size used in scaling is desired, see . - - - Setting then getting the StepSize may return a different value. This is not unusual since a user-defined step size is converted to a - base-2 representable number to improve finite difference accuracy. - - - - - Sets and gets the base finite difference step size. This assigned value to this parameter is only used if is set to RelativeX. - However, if the StepType is Relative, it will contain the base step size computed from based on the finite difference order. - - - - - Sets and gets the base finite difference step size. This parameter is only used if is set to Relative. - By default this is set to machine epsilon, from which is computed. - - - - - Sets and gets the location of the center point for the finite difference derivative. - - - - - Number of times a function is evaluated for numerical derivatives. - - - - - Type of step size for computing finite differences. If set to absolute, dx = h. - If set to relative, dx = (1+abs(x))*h^(2/(order+1)). This provides accurate results when - h is approximately equal to the square-root of machine accuracy, epsilon. - - - - - Evaluates the derivative of equidistant points using the finite difference method. - - Vector of points StepSize apart. - Derivative order. - Finite difference step size. - Derivative of points of the specified order. - - - - Evaluates the derivative of a scalar univariate function. - - - Supplying the optional argument currentValue will reduce the number of function evaluations - required to calculate the finite difference derivative. - - Function handle. - Point at which to compute the derivative. - Derivative order. - Current function value at center. - Function derivative at x of the specified order. - - - - Creates a function handle for the derivative of a scalar univariate function. - - Input function handle. - Derivative order. - Function handle that evaluates the derivative of input function at a fixed order. - - - - Evaluates the partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - Current function value at center. - Function partial derivative at x of the specified order. - - - - Evaluates the partial derivatives of a multivariate function array. - - - This function assumes the input vector x is of the correct length for f. - - Multivariate vector function array handle. - Vector at which to evaluate the derivatives. - Index of independent variable for partial derivative. - Derivative order. - Current function value at center. - Vector of functions partial derivatives at x of the specified order. - - - - Creates a function handle for the partial derivative of a multivariate function. - - Input function handle. 
- Index of the independent variable for partial derivative. - Derivative order. - Function handle that evaluates partial derivative of input function at a fixed order. - - - - Creates a function handle for the partial derivative of a vector multivariate function. - - Input function handle. - Index of the independent variable for partial derivative. - Derivative order. - Function handle that evaluates partial derivative of input function at fixed order. - - - - Evaluates the mixed partial derivative of variable order for multivariate functions. - - - This function recursively uses to evaluate mixed partial derivative. - Therefore, it is more efficient to call for higher order derivatives of - a single independent variable. - - Multivariate function handle. - Points at which to evaluate the derivative. - Vector of indices for the independent variables at descending derivative orders. - Highest order of differentiation. - Current function value at center. - Function mixed partial derivative at x of the specified order. - - - - Evaluates the mixed partial derivative of variable order for multivariate function arrays. - - - This function recursively uses to evaluate mixed partial derivative. - Therefore, it is more efficient to call for higher order derivatives of - a single independent variable. - - Multivariate function array handle. - Vector at which to evaluate the derivative. - Vector of indices for the independent variables at descending derivative orders. - Highest order of differentiation. - Current function value at center. - Function mixed partial derivatives at x of the specified order. - - - - Creates a function handle for the mixed partial derivative of a multivariate function. - - Input function handle. - Vector of indices for the independent variables at descending derivative orders. - Highest derivative order. - Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. - - - - Creates a function handle for the mixed partial derivative of a multivariate vector function. - - Input vector function handle. - Vector of indices for the independent variables at descending derivative orders. - Highest derivative order. - Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. - - - - Resets the evaluation counter. - - - - - Class for evaluating the Hessian of a smooth continuously differentiable function using finite differences. - By default, a central 3-point method is used. - - - - - Number of function evaluations. - - - - - Creates a numerical Hessian object with a three point central difference method. - - - - - Creates a numerical Hessian with a specified differentiation scheme. - - Number of points for Hessian evaluation. - Center point for differentiation. - - - - Evaluates the Hessian of the scalar univariate function f at points x. - - Scalar univariate function handle. - Point at which to evaluate Hessian. - Hessian tensor. - - - - Evaluates the Hessian of a multivariate function f at points x. - - - This method of computing the Hessian is only valid for Lipschitz continuous functions. - The function mirrors the Hessian along the diagonal since d2f/dxdy = d2f/dydx for continuously differentiable functions. - - Multivariate function handle.> - Points at which to evaluate Hessian.> - Hessian tensor. - - - - Resets the function evaluation counter for the Hessian. - - - - - Class for evaluating the Jacobian of a function using finite differences. - By default, a central 3-point method is used. 
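The finite-difference entries above (NumericalDerivative, NumericalHessian, NumericalJacobian) describe the MathNet.Numerics.Differentiation helpers. The following is only a minimal C# sketch of how these classes are typically called, assuming the MathNet.Numerics package is referenced; the exact overloads and parameter names may differ between library versions.

```csharp
using System;
using MathNet.Numerics.Differentiation;

class FiniteDifferenceDemo
{
    static void Main()
    {
        // Default scheme: 3-point central difference.
        var nd = new NumericalDerivative();

        // First and second derivative of sin(x) at x = 1 (expect ~cos(1) and ~-sin(1)).
        Func<double, double> f = Math.Sin;
        Console.WriteLine(nd.EvaluateDerivative(f, 1.0, 1));
        Console.WriteLine(nd.EvaluateDerivative(f, 1.0, 2));

        // Partial derivative of g(x0, x1) = x0^2 * x1 with respect to x1 at (2, 3): expect ~4.
        Func<double[], double> g = v => v[0] * v[0] * v[1];
        Console.WriteLine(nd.EvaluatePartialDerivative(g, new[] { 2.0, 3.0 }, 1, 1));

        // Jacobian (gradient) and Hessian helpers, both central 3-point by default.
        var jacobian = new NumericalJacobian();
        double[] grad = jacobian.Evaluate(g, new[] { 2.0, 3.0 });   // expect ~(12, 4)

        var hessian = new NumericalHessian();
        double[,] h = hessian.Evaluate(g, new[] { 2.0, 3.0 });      // 2x2 matrix, h[0,1] ~ 4

        Console.WriteLine($"{grad[0]}, {grad[1]}, {h[0, 1]}");
    }
}
```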
- - - - - Number of function evaluations. - - - - - Creates a numerical Jacobian object with a three point central difference method. - - - - - Creates a numerical Jacobian with a specified differentiation scheme. - - Number of points for Jacobian evaluation. - Center point for differentiation. - - - - Evaluates the Jacobian of scalar univariate function f at point x. - - Scalar univariate function handle. - Point at which to evaluate Jacobian. - Jacobian vector. - - - - Evaluates the Jacobian of a multivariate function f at vector x. - - - This function assumes that the length of vector x consistent with the argument count of f. - - Multivariate function handle. - Points at which to evaluate Jacobian. - Jacobian vector. - - - - Evaluates the Jacobian of a multivariate function f at vector x given a current function value. - - - To minimize the number of function evaluations, a user can supply the current value of the function - to be used in computing the Jacobian. This value must correspond to the "center" location for the - finite differencing. If a scheme is used where the center value is not evaluated, this will provide no - added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. - - Multivariate function handle. - Points at which to evaluate Jacobian. - Current function value at finite difference center. - Jacobian vector. - - - - Evaluates the Jacobian of a multivariate function array f at vector x. - - Multivariate function array handle. - Vector at which to evaluate Jacobian. - Jacobian matrix. - - - - Evaluates the Jacobian of a multivariate function array f at vector x given a vector of current function values. - - - To minimize the number of function evaluations, a user can supply a vector of current values of the functions - to be used in computing the Jacobian. These value must correspond to the "center" location for the - finite differencing. If a scheme is used where the center value is not evaluated, this will provide no - added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. - - Multivariate function array handle. - Vector at which to evaluate Jacobian. - Vector of current function values. - Jacobian matrix. - - - - Resets the function evaluation counter for the Jacobian. - - - - - Evaluates the Riemann-Liouville fractional derivative that uses the double exponential integration. - - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The expected relative accuracy of the Double-Exponential integration. - Approximation of the differintegral of order n at x. - - - - Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Legendre integration. - - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The number of Gauss-Legendre points. - Approximation of the differintegral of order n at x. - - - - Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Kronrod integration. 
- - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The expected relative accuracy of the Gauss-Kronrod integration. - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. - Approximation of the differintegral of order n at x. - - - - Metrics to measure the distance between two structures. - - - - - Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. - - - - - Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. - - - - - Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. - - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. - - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. - - - - - Canberra Distance, a weighted version of the L1-norm of the difference. - - - - - Canberra Distance, a weighted version of the L1-norm of the difference. - - - - - Cosine Distance, representing the angular distance while ignoring the scale. - - - - - Cosine Distance, representing the angular distance while ignoring the scale. - - - - - Hamming Distance, i.e. the number of positions that have different values in the vectors. - - - - - Hamming Distance, i.e. the number of positions that have different values in the vectors. - - - - - Pearson's distance, i.e. 1 - the person correlation coefficient. - - - - - Jaccard distance, i.e. 1 - the Jaccard index. - - Thrown if a or b are null. - Throw if a and b are of different lengths. - Jaccard distance. - - - - Jaccard distance, i.e. 1 - the Jaccard index. - - Thrown if a or b are null. - Throw if a and b are of different lengths. - Jaccard distance. - - - - Discrete Univariate Bernoulli distribution. 
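The distance metrics listed above correspond to the static Distance class in the MathNet.Numerics namespace. A short sketch, assuming a MathNet.Numerics package reference; overload details (for example the single-precision and generic variants) are not shown here.

```csharp
using System;
using MathNet.Numerics;

class DistanceDemo
{
    static void Main()
    {
        double[] a = { 1.0, 2.0, 3.0 };
        double[] b = { 2.0, 2.0, 5.0 };

        Console.WriteLine(Distance.SAD(a, b));            // sum of absolute differences: 3
        Console.WriteLine(Distance.Euclidean(a, b));      // L2 norm of the difference: sqrt(5)
        Console.WriteLine(Distance.Manhattan(a, b));      // L1 norm: 3
        Console.WriteLine(Distance.Chebyshev(a, b));      // infinity norm: 2
        Console.WriteLine(Distance.Minkowski(3.0, a, b)); // generalized p-norm with p = 3
        Console.WriteLine(Distance.Cosine(a, b));         // angular distance, scale ignored
        Console.WriteLine(Distance.Hamming(a, b));        // positions with different values: 2
    }
}
```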
- The Bernoulli distribution is a distribution over bits. The parameter - p specifies the probability that a 1 is generated. - Wikipedia - Bernoulli distribution. - - - - - Initializes a new instance of the Bernoulli class. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - If the Bernoulli parameter is not in the range [0,1]. - - - - Initializes a new instance of the Bernoulli class. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - The random number generator which is used to draw random samples. - If the Bernoulli parameter is not in the range [0,1]. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the mode of the distribution. - - - - - Gets all modes of the distribution. - - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the cumulative distribution at location . - - - - - Generates one sample from the Bernoulli distribution. - - The random source to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A random sample from the Bernoulli distribution. - - - - Samples a Bernoulli distributed random variable. - - A sample from the Bernoulli distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Bernoulli distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a Bernoulli distributed random variable. 
- - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A sample from the Bernoulli distribution. - - - - Samples a sequence of Bernoulli distributed random variables. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Samples a Bernoulli distributed random variable. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A sample from the Bernoulli distribution. - - - - Samples a sequence of Bernoulli distributed random variables. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Continuous Univariate Beta distribution. - For details about this distribution, see - Wikipedia - Beta distribution. - - - There are a few special cases for the parameterization of the Beta distribution. When both - shape parameters are positive infinity, the Beta distribution degenerates to a point distribution - at 0.5. When one of the shape parameters is positive infinity, the distribution degenerates to a point - distribution at the positive infinity. When both shape parameters are 0.0, the Beta distribution - degenerates to a Bernoulli distribution with parameter 0.5. When one shape parameter is 0.0, the - distribution degenerates to a point distribution at the non-zero shape parameter. - - - - - Initializes a new instance of the Beta class. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - Initializes a new instance of the Beta class. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - A string representation of the Beta distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - Gets the α shape parameter of the Beta distribution. Range: α ≥ 0. - - - - - Gets the β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Beta distribution. - - - - - Gets the variance of the Beta distribution. - - - - - Gets the standard deviation of the Beta distribution. - - - - - Gets the entropy of the Beta distribution. - - - - - Gets the skewness of the Beta distribution. - - - - - Gets the mode of the Beta distribution; when there are multiple answers, this routine will return 0.5. - - - - - Gets the median of the Beta distribution. - - - - - Gets the minimum of the Beta distribution. - - - - - Gets the maximum of the Beta distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. 
- - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the Beta distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Beta distribution. - - a sequence of samples from the distribution. - - - - Samples Beta distributed random variables by sampling two Gamma variables and normalizing. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a random number from the Beta distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. 
- The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Discrete Univariate Beta-Binomial distribution. - The beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising - when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. - The beta-binomial distribution is the binomial distribution in which the probability of success at each of n trials is not fixed but randomly drawn from a beta distribution. - It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics to capture overdispersion in binomial type distributed data. - Wikipedia - Beta-Binomial distribution. - - - - - Initializes a new instance of the class. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - - - - Initializes a new instance of the class. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - - - - Tests whether the provided values are valid parameters for this distribution. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). 
- - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Samples BetaBinomial distributed random variables by sampling a Beta distribution then passing to a Binomial distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a random number from the BetaBinomial distribution. - - - - Samples a BetaBinomial distributed random variable. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of BetaBinomial distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a BetaBinomial distributed random variable. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - - - - Samples an array of BetaBinomial distributed random variables. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. 
- The number of trials (n). Range: n ≥ 0. - - - - Initializes a new instance of the BetaScaled class. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - - - - Initializes a new instance of the BetaScaled class. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The random number generator which is used to draw random samples. - - - - Create a Beta PERT distribution, used in risk analysis and other domains where an expert forecast - is used to construct an underlying beta distribution. - - The minimum value. - The maximum value. - The most likely value (mode). - The random number generator which is used to draw random samples. - The Beta distribution derived from the PERT parameters. - - - - A string representation of the distribution. - - A string representation of the BetaScaled distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - - - - Gets the α shape parameter of the BetaScaled distribution. Range: α > 0. - - - - - Gets the β shape parameter of the BetaScaled distribution. Range: β > 0. - - - - - Gets the location (μ) of the BetaScaled distribution. - - - - - Gets the scale (σ) of the BetaScaled distribution. Range: σ > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the BetaScaled distribution. - - - - - Gets the variance of the BetaScaled distribution. - - - - - Gets the standard deviation of the BetaScaled distribution. - - - - - Gets the entropy of the BetaScaled distribution. - - - - - Gets the skewness of the BetaScaled distribution. - - - - - Gets the mode of the BetaScaled distribution; when there are multiple answers, this routine will return 0.5. - - - - - Gets the median of the BetaScaled distribution. - - - - - Gets the minimum of the BetaScaled distribution. - - - - - Gets the maximum of the BetaScaled distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - a sample from the distribution. 
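The Bernoulli and Beta members documented above, and the Beta PERT helper described for BetaScaled, are used roughly as follows. This is a hedged sketch, not the authoritative API: the static PERT factory and the (α, β, x) argument order are taken from the documentation text above and may differ in detail between versions.

```csharp
using System;
using MathNet.Numerics.Distributions;

class BetaFamilyDemo
{
    static void Main()
    {
        // Bernoulli with success probability p = 0.3.
        var coin = new Bernoulli(0.3);
        Console.WriteLine($"mean={coin.Mean}, draw={coin.Sample()}");

        // Beta(α=2, β=5): static density/quantile helpers take (α, β, x).
        Console.WriteLine(Beta.PDF(2.0, 5.0, 0.3));
        Console.WriteLine(Beta.CDF(2.0, 5.0, 0.3));
        Console.WriteLine(Beta.InvCDF(2.0, 5.0, 0.5));   // median; slow, per the warning above

        // Beta PERT factory on BetaScaled: (minimum, maximum, most likely value).
        var pert = BetaScaled.PERT(1.0, 10.0, 7.0);
        Console.WriteLine($"PERT mean={pert.Mean}, one draw={pert.Sample()}");
    }
}
```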
- - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - The random number generator to use. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sample from the distribution. 
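Every distribution above exposes the same trio of sampling members: Sample, a lazy Samples sequence, and an array-filling overload, each available both on an instance and as a static that takes an explicit random source. A sketch using BetaScaled(α, β, μ, σ), assuming the parameter order given in the documentation text:

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;
using MathNet.Numerics.Random;

class SamplingDemo
{
    static void Main()
    {
        // Instance API: parameters are fixed at construction time.
        var d = new BetaScaled(2.0, 3.0, 0.0, 10.0);    // α, β, location μ, scale σ
        double one = d.Sample();
        var buffer = new double[1000];
        d.Samples(buffer);                               // fill an existing array in place

        // Static API: pass the random source and the parameters on every call.
        var rng = new MersenneTwister(42);
        double[] more = BetaScaled.Samples(rng, 2.0, 3.0, 0.0, 10.0).Take(1000).ToArray();

        Console.WriteLine($"{one} {buffer.Average()} {more.Average()}");
    }
}
```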
- - - - Generates a sequence of samples from the distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Discrete Univariate Binomial distribution. - For details about this distribution, see - Wikipedia - Binomial distribution. - - - The distribution is parameterized by a probability (between 0.0 and 1.0). - - - - - Initializes a new instance of the Binomial class. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - If is not in the interval [0.0,1.0]. - If is negative. - - - - Initializes a new instance of the Binomial class. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The random number generator which is used to draw random samples. - If is not in the interval [0.0,1.0]. - If is negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - - - - Gets the success probability in each trial. Range: 0 ≤ p ≤ 1. - - - - - Gets the number of trials. Range: n ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the mode of the distribution. - - - - - Gets all modes of the distribution. - - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). 
- - The location in the domain where we want to evaluate the log probability mass function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the cumulative distribution at location . - - - - - Generates a sample from the Binomial distribution without doing parameter checking. - - The random number generator to use. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successful trials. - - - - Samples a Binomially distributed random variable. - - The number of successes in N trials. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Binomially distributed random variables. - - a sequence of successes in N trials. - - - - Samples a binomially distributed random variable. - - The random number generator to use. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successes in trials. - - - - Samples a sequence of binomially distributed random variable. - - The random number generator to use. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Samples a binomially distributed random variable. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successes in trials. - - - - Samples a sequence of binomially distributed random variable. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Gets the scale (a) of the distribution. Range: a > 0. - - - - - Gets the first shape parameter (c) of the distribution. Range: c > 0. - - - - - Gets the second shape parameter (k) of the distribution. Range: k > 0. - - - - - Initializes a new instance of the Burr Type XII class. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The random number generator which is used to draw random samples. Optional, can be null. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. 
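For the Binomial entries above, the static PMF/CDF helpers take the success probability p and the trial count n followed by the evaluation point. A brief sketch under the same assumptions as the previous examples:

```csharp
using System;
using MathNet.Numerics.Distributions;

class BinomialDemo
{
    static void Main()
    {
        // P(X = 3) and P(X <= 3) for 10 trials with p = 0.25.
        Console.WriteLine(Binomial.PMF(0.25, 10, 3));
        Console.WriteLine(Binomial.CDF(0.25, 10, 3.0));

        // Instance API: a sample is the number of successes in n trials.
        var binomial = new Binomial(0.25, 10);
        Console.WriteLine($"mean={binomial.Mean}, draw={binomial.Sample()}");

        var draws = new int[100];
        binomial.Samples(draws);   // fill an array with success counts
        Console.WriteLine(draws[0]);
    }
}
```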
- - - - Gets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Burr distribution. - - - - - Gets the variance of the Burr distribution. - - - - - Gets the standard deviation of the Burr distribution. - - - - - Gets the mode of the Burr distribution. - - - - - Gets the minimum of the Burr distribution. - - - - - Gets the maximum of the Burr distribution. - - - - - Gets the entropy of the Burr distribution (currently not supported). - - - - - Gets the skewness of the Burr distribution. - - - - - Gets the median of the Burr distribution. - - - - - Generates a sample from the Burr distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - - - - Generates a sequence of samples from the Burr distribution. - - a sequence of samples from the distribution. - - - - Generates a sample from the Burr distribution. - - The random number generator to use. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - - - - Generates a sequence of samples from the Burr distribution. - - The random number generator to use. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - a sequence of samples from the distribution. - - - - Gets the n-th raw moment of the distribution. - - The order (n) of the moment. Range: n ≥ 1. - the n-th moment of the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The scale parameter a of the Burr distribution. 
Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Discrete Univariate Categorical distribution. - For details about this distribution, see - Wikipedia - Categorical distribution. This - distribution is sometimes called the Discrete distribution. - - - The distribution is parameterized by a vector of ratios: in other words, the parameter - does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized - to sum to 1 in floating point representation. - - - Support: 0..k where k = length(probability mass array)-1 - - - - - Initializes a new instance of the Categorical class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - If any of the probabilities are negative or do not sum to one. - - - - Initializes a new instance of the Categorical class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The random number generator which is used to draw random samples. - If any of the probabilities are negative or do not sum to one. - - - - Initializes a new instance of the Categorical class from a . The distribution - will not be automatically updated when the histogram changes. The categorical distribution will have - one value for each bucket and a probability for that value proportional to the bucket count. - - The histogram from which to create the categorical variable. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Checks whether the parameters of the distribution are valid. - - An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. - If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true - - - - Checks whether the parameters of the distribution are valid. - - An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. - If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true - - - - Gets the probability mass vector (non-negative ratios) of the multinomial. - - Sometimes the normalized probability vector cannot be represented exactly in a floating point representation. - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - Throws a . - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets he mode of the distribution. - - Throws a . - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . 
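As the Categorical remarks above stress, the probability-mass argument is a vector of non-negative ratios and does not have to sum to one. A minimal sketch of the instance API (package reference assumed; member names follow the documentation text):

```csharp
using System;
using MathNet.Numerics.Distributions;

class CategoricalDemo
{
    static void Main()
    {
        // Unnormalized ratios 1 : 2 : 5 over the support {0, 1, 2}.
        double[] ratios = { 1.0, 2.0, 5.0 };
        var cat = new Categorical(ratios);

        Console.WriteLine(cat.Probability(2));              // P(X = 2) = 5/8
        Console.WriteLine(cat.CumulativeDistribution(1.0)); // P(X <= 1) = 3/8
        Console.WriteLine(cat.Sample());                    // a value in {0, 1, 2}
    }
}
```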
- - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - An array corresponding to a CDF for a categorical distribution. Not assumed to be normalized. - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. - - - - Computes the cumulative distribution function. This method performs no parameter checking. - If the probability mass was normalized, the resulting cumulative distribution is normalized as well (up to numerical errors). - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - An array representing the unnormalized cumulative distribution function. - - - - Returns one trials from the categorical distribution. - - The random number generator to use. - The (unnormalized) cumulative distribution of the probability distribution. - One sample from the categorical distribution implied by . - - - - Samples a Binomially distributed random variable. - - The number of successful trials. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Bernoulli distributed random variables. - - a sequence of successful trial counts. 
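The static Categorical counterparts documented above take the random source and the ratio array explicitly, which is convenient when many draws share one RNG. A hedged sketch; the exact order of the fill-overload arguments is assumed from the parameter descriptions above.

```csharp
using System;
using MathNet.Numerics.Distributions;
using MathNet.Numerics.Random;

class CategoricalStaticDemo
{
    static void Main()
    {
        var rng = new MersenneTwister(1);
        double[] ratios = { 1.0, 2.0, 5.0 };   // need not be normalized

        // One draw, and a filled array of draws, sharing the same random source.
        int one = Categorical.Sample(rng, ratios);
        var many = new int[10];
        Categorical.Samples(rng, many, ratios);

        Console.WriteLine($"{one} {string.Join(",", many)}");
    }
}
```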
- - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - The random number generator to use. - An array of nonnegative ratios. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - The random number generator to use. - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - An array of nonnegative ratios. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - The random number generator to use. - An array of the cumulative distribution. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - The random number generator to use. - An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - An array of the cumulative distribution. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Continuous Univariate Cauchy distribution. - The Cauchy distribution is a symmetric continuous probability distribution. For details about this distribution, see - Wikipedia - Cauchy distribution. - - - - - Initializes a new instance of the class with the location parameter set to 0 and the scale parameter set to 1 - - - - - Initializes a new instance of the class. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. 
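The Cauchy entries above are parameterized by a location x0 and a scale γ > 0. A short usage sketch of the instance members, under the same package assumption as before:

```csharp
using System;
using MathNet.Numerics.Distributions;

class CauchyDemo
{
    static void Main()
    {
        // Cauchy with location x0 = 0 and scale gamma = 2.
        var cauchy = new Cauchy(0.0, 2.0);

        Console.WriteLine(cauchy.Density(1.0));                       // PDF at x = 1
        Console.WriteLine(cauchy.CumulativeDistribution(1.0));        // P(X <= 1)
        Console.WriteLine(cauchy.InverseCumulativeDistribution(0.5)); // median, equal to x0
        Console.WriteLine(cauchy.Sample());                           // one random draw
    }
}
```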
Cauchy distribution
Continuous univariate Cauchy distribution, a symmetric continuous probability distribution. For details see Wikipedia - Cauchy distribution.
* Constructors: default (location x0 = 0, scale γ = 1); (location x0, scale γ, range γ > 0); the same with an explicit random number generator used to draw random samples.
* ToString returns a string representation of the distribution; a parameter-validation method tests whether a given location x0 and scale γ (γ > 0) are valid parameters for this distribution.
* Properties: location (x0); scale (γ > 0); the random number generator used to draw random samples (get/set); mean; variance; standard deviation; entropy; skewness; mode; median; minimum; maximum.
* Instance methods: probability density (PDF) at x, i.e. ∂P(X ≤ x)/∂x; log density (lnPDF); cumulative distribution (CDF) at x, i.e. P(X ≤ x); inverse CDF (InvCDF) at a given probability, also known as the quantile or percent point function; draw one random sample; fill an array with samples; generate a sequence of samples.
* Static methods take the location (x0) and scale (γ > 0) explicitly: PDF, lnPDF, CDF and InvCDF at a given location, plus sample / sample-sequence / array-fill overloads with and without an explicit random number generator.
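A minimal usage sketch for the Cauchy members listed above, assuming the commonly used MathNet.Numerics names Density, CumulativeDistribution, InverseCumulativeDistribution and Sample (inferred, not confirmed by the text):

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.Cauchy with the usual
// instance members (Density, CumulativeDistribution, InverseCumulativeDistribution, Sample).
using System;
using MathNet.Numerics.Distributions;

var cauchy = new Cauchy(0.0, 1.0);                             // location x0 = 0, scale γ = 1

Console.WriteLine(cauchy.Density(0.0));                        // 1/π ≈ 0.3183
Console.WriteLine(cauchy.CumulativeDistribution(1.0));         // 0.75
Console.WriteLine(cauchy.InverseCumulativeDistribution(0.75)); // 1.0 (quantile function)
Console.WriteLine(cauchy.Sample());                            // one random draw
```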
Chi distribution
Continuous univariate Chi distribution. It usually arises when the orthogonal components of a k-dimensional vector are independent and each follows a standard normal distribution; the length of the vector then has a Chi distribution. See Wikipedia - Chi distribution.
* Constructors: (degrees of freedom k, range k > 0), optionally with a random number generator used to draw random samples.
* ToString; parameter validation for the degrees of freedom (k > 0).
* Properties: degrees of freedom (k > 0); random number generator (get/set); mean; variance; standard deviation; entropy; skewness; mode; median; minimum; maximum.
* Instance methods: PDF, lnPDF and CDF at a given location; draw one sample; fill an array with samples; generate a sequence of samples. An unchecked helper samples the distribution directly from a random number generator and the degrees of freedom.
* Static methods take the degrees of freedom explicitly: PDF, lnPDF and CDF, plus sample / sample-sequence / array-fill overloads with and without an explicit random number generator.
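A short sketch of the Chi members above, under the same naming assumptions (Density, CumulativeDistribution, Sample):

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.Chi (member names inferred).
using System;
using MathNet.Numerics.Distributions;

var chi = new Chi(3.0);                             // k = 3 degrees of freedom

Console.WriteLine(chi.Mean);                        // ≈ 1.5958 for k = 3
Console.WriteLine(chi.Density(1.0));                // PDF at x = 1
Console.WriteLine(chi.CumulativeDistribution(2.0)); // P(X ≤ 2)
Console.WriteLine(chi.Sample());                    // one random draw
```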
Chi-Squared distribution
Continuous univariate Chi-Squared distribution: the distribution of a sum of the squares of k independent standard normal random variables. See Wikipedia - ChiSquare distribution.
* Constructors: (degrees of freedom k, range k > 0), optionally with a random number generator.
* ToString; parameter validation for the degrees of freedom (k > 0).
* Properties: degrees of freedom (k > 0); random number generator (get/set); mean; variance; standard deviation; entropy; skewness; mode; median; minimum; maximum.
* Instance methods: PDF, lnPDF, CDF and InvCDF (the quantile or percent point function) at a given location; draw one sample; fill an array with samples; generate a sequence of samples. An unchecked helper samples the distribution from a random number generator and the degrees of freedom.
* Static methods take the degrees of freedom explicitly: PDF, lnPDF, CDF and InvCDF, plus sample / sample-sequence / array-fill overloads with and without an explicit random number generator.
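The inverse CDF is handy for critical values; a sketch assuming the member names above carry over to ChiSquared:

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.ChiSquared (member names inferred).
using System;
using MathNet.Numerics.Distributions;

var chi2 = new ChiSquared(5.0);                              // k = 5 degrees of freedom

Console.WriteLine(chi2.Mean);                                // 5
Console.WriteLine(chi2.CumulativeDistribution(11.07));       // ≈ 0.95
Console.WriteLine(chi2.InverseCumulativeDistribution(0.95)); // ≈ 11.07, the 5% critical value
Console.WriteLine(chi2.Sample());                            // one random draw
```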
Continuous Uniform distribution
Continuous univariate uniform distribution over real numbers. See Wikipedia - Continuous uniform distribution.
* Constructors: default (lower bound 0, upper bound 1); (lower, upper) with lower ≤ upper, optionally with a random number generator; an exception is thrown if the upper bound is smaller than the lower bound.
* ToString; parameter validation for the lower and upper bound (lower ≤ upper).
* Properties: lower bound; upper bound; random number generator (get/set); mean; variance; standard deviation; entropy; skewness; mode; median; minimum; maximum.
* Instance methods: PDF, lnPDF, CDF and InvCDF (the quantile or percent point function) at a given location; draw one sample; fill an array with samples; generate a sequence of samples.
* Static methods take the lower and upper bound explicitly: PDF, lnPDF, CDF and InvCDF, plus uniformly distributed sample / sample-sequence / array-fill overloads with and without an explicit random number generator.
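A small sketch of the uniform distribution's PDF/CDF/InvCDF behaviour, with the same assumed member names:

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.ContinuousUniform (member names inferred).
using System;
using MathNet.Numerics.Distributions;

var u = new ContinuousUniform(0.0, 10.0);                 // lower = 0, upper = 10

Console.WriteLine(u.Density(4.2));                        // 0.1 everywhere inside [0, 10]
Console.WriteLine(u.CumulativeDistribution(2.5));         // 0.25
Console.WriteLine(u.InverseCumulativeDistribution(0.5));  // 5
Console.WriteLine(u.Sample());                            // one random draw from the interval
```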
Conway-Maxwell-Poisson distribution
Discrete univariate Conway-Maxwell-Poisson (CMP) distribution, a generalization of the Poisson, Geometric and Bernoulli distributions, parameterized by two real numbers lambda (λ) and nu (ν): for ν = 0 the distribution reverts to a Geometric distribution, for ν = 1 to the Poisson distribution, and for ν → ∞ it converges to a Bernoulli distribution. This implementation caches the value of the normalization constant. See Wikipedia - ConwayMaxwellPoisson distribution.
* Internal fields cache the mean, the variance and the normalization constant; because many properties of the distribution can only be computed approximately, a tolerance level specifies how much error is accepted.
* Constructors: (λ > 0, rate of decay ν ≥ 0), optionally with a random number generator.
* ToString; parameter validation for λ (λ > 0) and ν (ν ≥ 0).
* Properties: lambda (λ > 0); rate of decay (ν ≥ 0); random number generator (get/set); mean; variance; standard deviation; entropy; skewness; mode; median; the smallest and largest element of the domain representable by an integer.
* Instance methods: PMF at k, i.e. P(X = k); lnPMF; CDF at x, i.e. P(X ≤ x). Static PMF, lnPMF and CDF overloads take λ and ν explicitly.
* The normalization constant of the Conway-Maxwell-Poisson distribution is exposed, and a helper computes an approximate normalization constant for given λ and ν. An unchecked helper returns one sample given a random number generator, λ, ν and the z (normalization) parameter.
* Sampling: draw one Conway-Maxwell-Poisson distributed random variable, fill an array with samples, or generate a sequence of samples; static sample / sample-sequence / array-fill overloads take λ and ν, with and without an explicit random number generator.
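A sketch of the two-parameter CMP API; class and member names are assumptions based on this reference, and the ν = 1 case is used because it should reduce to a Poisson distribution:

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.ConwayMaxwellPoisson (member names inferred).
using System;
using MathNet.Numerics.Distributions;

var cmp = new ConwayMaxwellPoisson(2.0, 1.0);       // λ = 2, ν = 1: reduces to Poisson(2)

Console.WriteLine(cmp.Probability(0));              // ≈ e^-2 ≈ 0.1353 in the ν = 1 case
Console.WriteLine(cmp.CumulativeDistribution(3.0)); // P(X ≤ 3)
Console.WriteLine(cmp.Sample());                    // one random count
```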
Dirichlet distribution
Multivariate Dirichlet distribution. See Wikipedia - Dirichlet distribution.
* Constructors: from an array with the Dirichlet parameters (using the default random number generator or an explicitly supplied one), or from a single parameter value plus the dimension of the distribution (again with or without an explicit random number generator).
* ToString; parameter validation: no parameter may be less than zero and at least one parameter must be larger than zero.
* Properties: the Dirichlet parameters (get/set); random number generator (get/set); dimension; the sum of the Dirichlet parameters; mean; variance; entropy.
* Density and log density at a vector of locations; the Dirichlet distribution requires that the components of x sum to 1, and the last component may be left out, in which case it is computed from the others.
* Sampling: draw one Dirichlet distributed random vector from the instance, or use the static sampler that takes a random number generator and the Dirichlet parameter array.
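A sketch of the multivariate Dirichlet usage described above; the assumption that Sample() returns a double[] on the unit simplex is inferred, not stated:

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.Dirichlet (member names inferred).
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

var dir = new Dirichlet(new[] { 2.0, 3.0, 5.0 });

double[] v = dir.Sample();                                // random vector on the simplex
Console.WriteLine(v.Sum());                               // ≈ 1.0 (components sum to one)
Console.WriteLine(dir.Density(new[] { 0.2, 0.3, 0.5 }));  // density at a point of the simplex
```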
Discrete Uniform distribution
Discrete univariate uniform distribution over integers, parameterized by a lower and an upper bound (both inclusive). See Wikipedia - Discrete uniform distribution.
* Constructors: (lower, upper), both inclusive, with lower ≤ upper, optionally with a random number generator.
* ToString; parameter validation for the inclusive bounds (lower ≤ upper).
* Properties: inclusive lower bound; inclusive upper bound; random number generator (get/set); mean; standard deviation; variance; entropy; skewness; the smallest and largest element of the domain representable by an integer; mode (since every element in the domain has the same probability, the middle one is returned); median.
* Instance methods: PMF at k, i.e. P(X = k); lnPMF; CDF at x, i.e. P(X ≤ x). Static PMF, lnPMF and CDF overloads take the inclusive bounds explicitly.
* An unchecked helper generates one sample from a random source and the bounds without any parameter checking.
* Sampling: draw one uniformly distributed random variable, fill an array with samples, or generate a sequence of samples; static sample / sample-sequence / array-fill overloads take the inclusive bounds, with and without an explicit random number generator.
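A dice-roll sketch of the inclusive-bounds behaviour, with assumed member names Probability, CumulativeDistribution and Sample:

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.DiscreteUniform (member names inferred).
using System;
using MathNet.Numerics.Distributions;

var die = new DiscreteUniform(1, 6);              // fair six-sided die, bounds inclusive

Console.WriteLine(die.Probability(3));            // 1/6 ≈ 0.1667
Console.WriteLine(die.CumulativeDistribution(3)); // 0.5
Console.WriteLine(die.Sample());                  // integer in 1..6
```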
Erlang distribution
Continuous univariate Erlang distribution, a continuous probability distribution with wide applicability, primarily due to its relation to the exponential and Gamma distributions. See Wikipedia - Erlang distribution.
* Constructors: (shape k ≥ 0, rate or inverse scale λ ≥ 0), optionally with a random number generator. Factory methods construct an Erlang distribution either from a shape and scale (μ ≥ 0) or from a shape and rate, initialized with the default random number generator; an explicit generator may optionally be supplied (it can be null).
* ToString; parameter validation for shape (k ≥ 0) and rate (λ ≥ 0).
* Properties: shape (k ≥ 0); rate or inverse scale (λ ≥ 0); scale; random number generator (get/set); mean; variance; standard deviation; entropy; skewness; mode; median; minimum; maximum.
* Instance methods: PDF, lnPDF and CDF at a given location; draw one sample; fill an array with samples; generate a sequence of samples.
* Static methods take the shape and rate explicitly: PDF, lnPDF and CDF, plus sample / sample-sequence / array-fill overloads with and without an explicit random number generator.
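A sketch of the shape/rate parameterization above (member names assumed as before):

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.Erlang (member names inferred).
using System;
using MathNet.Numerics.Distributions;

var erlang = new Erlang(3, 2.0);                       // shape k = 3, rate λ = 2

Console.WriteLine(erlang.Mean);                        // k/λ = 1.5
Console.WriteLine(erlang.Density(1.0));                // PDF at x = 1
Console.WriteLine(erlang.CumulativeDistribution(1.5)); // P(X ≤ 1.5)
Console.WriteLine(erlang.Sample());                    // one random draw
```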
Exponential distribution
Continuous univariate exponential distribution over the real numbers, parameterized by one non-negative rate parameter. See Wikipedia - exponential distribution.
* Constructors: (rate λ ≥ 0), optionally with a random number generator.
* ToString; parameter validation for the rate (λ ≥ 0).
* Properties: rate (λ ≥ 0); random number generator (get/set); mean; variance; standard deviation; entropy; skewness; mode; median; minimum; maximum.
* Instance methods: PDF, lnPDF, CDF and InvCDF (the quantile or percent point function) at a given location; draw one random sample; fill an array with samples; generate a sequence of samples.
* Static methods take the rate explicitly: PDF, lnPDF, CDF and InvCDF, plus sample / sample-sequence / array-fill overloads with and without an explicit random number generator.
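A sketch of the rate parameterization and the quantile function (member names assumed):

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.Exponential (member names inferred).
using System;
using MathNet.Numerics.Distributions;

var exp = new Exponential(0.5);                            // rate λ = 0.5, mean 1/λ = 2

Console.WriteLine(exp.Mean);                               // 2
Console.WriteLine(exp.CumulativeDistribution(2.0));        // 1 - e^-1 ≈ 0.632
Console.WriteLine(exp.InverseCumulativeDistribution(0.5)); // ln 2 / λ ≈ 1.386
Console.WriteLine(exp.Sample());                           // one random draw
```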
Fisher-Snedecor (F) distribution
Continuous univariate F-distribution, also known as the Fisher-Snedecor distribution. See Wikipedia - FisherSnedecor distribution.
* Constructors: (first degree of freedom d1 > 0, second degree of freedom d2 > 0), optionally with a random number generator.
* ToString; parameter validation for d1 and d2 (both > 0).
* Properties: first degree of freedom (d1 > 0); second degree of freedom (d2 > 0); random number generator (get/set); mean; variance; standard deviation; entropy; skewness; mode; median; minimum; maximum.
* Instance methods: PDF, lnPDF and CDF at a given location; InvCDF (the quantile or percent point function), with the warning that it is currently not an explicit implementation and hence slow and unreliable; draw one sample; fill an array with samples; generate a sequence of samples. An unchecked helper generates one sample from a random number generator and the two degrees of freedom without parameter checking.
* Static methods take d1 and d2 explicitly: PDF, lnPDF, CDF and InvCDF (same warning), plus sample / sample-sequence / array-fill overloads with and without an explicit random number generator.
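A sketch using the F-distribution's CDF for a textbook critical value (member names assumed):

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.FisherSnedecor (member names inferred).
using System;
using MathNet.Numerics.Distributions;

var f = new FisherSnedecor(5.0, 10.0);              // d1 = 5, d2 = 10

Console.WriteLine(f.Density(1.0));                  // PDF at x = 1
Console.WriteLine(f.CumulativeDistribution(3.33));  // ≈ 0.95, the upper 5% critical value of F(5,10)
Console.WriteLine(f.Sample());                      // one random draw
```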
Gamma distribution
Continuous univariate Gamma distribution. See Wikipedia - Gamma distribution.
The Gamma distribution is parametrized by a shape and an inverse scale (rate) parameter. To specify a Gamma distribution that is a point distribution, set the shape parameter to the location of the point and the inverse scale to positive infinity; the distribution with shape and inverse scale both zero is undefined. Random number generation for the Gamma distribution is based on the algorithm in "A Simple Method for Generating Gamma Variables", Marsaglia & Tsang, ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, pages 363–372.
* Constructors: (shape k/α ≥ 0, rate or inverse scale β ≥ 0), optionally with a random number generator. Factory methods construct a Gamma distribution either from a shape and scale (θ ≥ 0) or from a shape and rate, initialized with the default random number generator; an explicit generator may optionally be supplied (it can be null).
* ToString; parameter validation for shape (α ≥ 0) and rate (β ≥ 0).
* Properties: shape (get/set, α ≥ 0); rate or inverse scale (get/set, β ≥ 0); scale (get/set, θ); random number generator (get/set); mean; variance; standard deviation; entropy; skewness; mode; median; minimum; maximum.
* Instance methods: PDF, lnPDF, CDF and InvCDF (the quantile or percent point function) at a given location; draw one sample; fill an array with samples; generate a sequence of samples. The unchecked sampling helper implements the Marsaglia & Tsang method and performs no parameter checks.
* Static methods take the shape and rate explicitly: PDF, lnPDF, CDF and InvCDF, plus sample / sample-sequence / array-fill overloads with and without an explicit random number generator.
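A sketch of the shape/rate (α, β) parameterization and the Marsaglia & Tsang based sampler (member names assumed):

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.Gamma (member names inferred).
using System;
using MathNet.Numerics.Distributions;

var gamma = new Gamma(2.0, 1.5);                              // shape α = 2, rate β = 1.5

Console.WriteLine(gamma.Mean);                                // α/β ≈ 1.333
Console.WriteLine(gamma.Mode);                                // (α-1)/β ≈ 0.667
Console.WriteLine(gamma.Density(1.0));                        // PDF at x = 1
Console.WriteLine(gamma.InverseCumulativeDistribution(0.95)); // 95% quantile
Console.WriteLine(gamma.Sample());                            // Marsaglia & Tsang sampler under the hood
```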
Geometric distribution
Discrete univariate Geometric distribution over the positive integers, parameterized by one positive real number; this implementation of the Geometric distribution will never generate 0's. See Wikipedia - geometric distribution.
* Constructors: (probability p of generating a one, 0 ≤ p ≤ 1), optionally with a random number generator.
* ToString; parameter validation for p (0 ≤ p ≤ 1).
* Properties: the probability of generating a one (0 ≤ p ≤ 1); random number generator (get/set); mean; variance; standard deviation; entropy; skewness (throws a not-supported exception); mode; median; the smallest and largest element of the domain representable by an integer.
* Instance methods: PMF at k, i.e. P(X = k); lnPMF; CDF at x, i.e. P(X ≤ x). Static PMF, lnPMF and CDF overloads take p explicitly.
* An unchecked helper returns one sample from the distribution given a random number generator and p.
* Sampling: draw one Geometric distributed random variable, fill an array with samples, or generate a sequence of samples; static sample / sample-sequence / array-fill overloads take p, with and without an explicit random number generator.
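A sketch illustrating that this implementation's support starts at 1 (member names assumed):

```csharp
// Hedged sketch: assumes MathNet.Numerics.Distributions.Geometric (member names inferred).
using System;
using MathNet.Numerics.Distributions;

var geo = new Geometric(0.25);          // p = 0.25, support starts at 1 (never 0)

Console.WriteLine(geo.Mean);            // 1/p = 4
Console.WriteLine(geo.Probability(1));  // p = 0.25
Console.WriteLine(geo.Probability(3));  // (1-p)^2 · p ≈ 0.1406
Console.WriteLine(geo.Sample());        // random trial count ≥ 1
```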
- the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the cumulative distribution at location . - - - - - Returns one sample from the distribution. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - One sample from the distribution implied by . - - - - Samples a Geometric distributed random variable. - - A sample from the Geometric distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Geometric distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a random variable. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Samples a random variable. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Discrete Univariate Hypergeometric distribution. - This distribution is a discrete probability distribution that describes the number of successes in a sequence - of n draws from a finite population without replacement, just as the binomial distribution - describes the number of successes for draws with replacement - Wikipedia - Hypergeometric distribution. - - - - - Initializes a new instance of the Hypergeometric class. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Initializes a new instance of the Hypergeometric class. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). 
- - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the size of the population (N). - - - - - Gets the number of draws without replacement (n). - - - - - Gets the number successes within the population (K, M). - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - the cumulative distribution at location . - - - - - Generates a sample from the Hypergeometric distribution without doing parameter checking. - - The random number generator to use. - The size of the population (N). - The number successes within the population (K, M). - The n parameter of the distribution. - a random number from the Hypergeometric distribution. - - - - Samples a Hypergeometric distributed random variable. - - The number of successes in n trials. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Hypergeometric distributed random variables. - - a sequence of successes in n trials. - - - - Samples a random variable. - - The random number generator to use. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Samples a sequence of this random variable. - - The random number generator to use. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The size of the population (N). 
- The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Samples a random variable. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Samples a sequence of this random variable. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Continuous Univariate Probability Distribution. - - - - - - Gets the mode of the distribution. - - - - - Gets the smallest element in the domain of the distribution which can be represented by a double. - - - - - Gets the largest element in the domain of the distribution which can be represented by a double. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - Draws a random sample from the distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Draws a sequence of random samples from the distribution. - - an infinite sequence of samples from the distribution. - - - - Discrete Univariate Probability Distribution. - - - - - - Gets the mode of the distribution. - - - - - Gets the smallest element in the domain of the distribution which can be represented by an integer. - - - - - Gets the largest element in the domain of the distribution which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Draws a random sample from the distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Draws a sequence of random samples from the distribution. - - an infinite sequence of samples from the distribution. - - - - Probability Distribution. - - - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Continuous Univariate Inverse Gamma distribution. - The inverse Gamma distribution is a distribution over the positive real numbers parameterized by - two positive parameters. - Wikipedia - InverseGamma distribution. - - - - - Initializes a new instance of the class. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - - - - Initializes a new instance of the class. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. 
- - - - Tests whether the provided values are valid parameters for this distribution. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - - - - Gets or sets the shape (α) parameter. Range: α > 0. - - - - - Gets or sets The scale (β) parameter. Range: β > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - Throws . - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Draws a random sample from the distribution. - - A random number from this distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Cauchy distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - the cumulative distribution at location . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sample from the distribution. 
- - - - Generates a sequence of samples from the distribution. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sequence of samples from the distribution. - - - - Gets the mean (μ) of the distribution. Range: μ > 0. - - - - - Gets the shape (λ) of the distribution. Range: λ > 0. - - - - - Initializes a new instance of the InverseGaussian class. - - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - - - - Gets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Inverse Gaussian distribution. - - - - - Gets the variance of the Inverse Gaussian distribution. - - - - - Gets the standard deviation of the Inverse Gaussian distribution. - - - - - Gets the median of the Inverse Gaussian distribution. - No closed form analytical expression exists, so this value is approximated numerically and can throw an exception. - - - - - Gets the minimum of the Inverse Gaussian distribution. - - - - - Gets the maximum of the Inverse Gaussian distribution. - - - - - Gets the skewness of the Inverse Gaussian distribution. - - - - - Gets the kurtosis of the Inverse Gaussian distribution. - - - - - Gets the mode of the Inverse Gaussian distribution. - - - - - Gets the entropy of the Inverse Gaussian distribution (currently not supported). - - - - - Generates a sample from the inverse Gaussian distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - - - - Generates a sequence of samples from the inverse Gaussian distribution. - - a sequence of samples from the distribution. - - - - Generates a sample from the inverse Gaussian distribution. - - The random number generator to use. - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - - - - Generates a sequence of samples from the Burr distribution. - - The random number generator to use. - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). 
- - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. - - The location at which to compute the inverse cumulative distribution function. - the inverse cumulative distribution at location . - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. - - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - The location at which to compute the inverse cumulative distribution function. - the inverse cumulative distribution at location . - - - - - Estimates the Inverse Gaussian parameters from sample data with maximum-likelihood. - - The samples to estimate the distribution parameters from. - The random number generator which is used to draw random samples. Optional, can be null. - An Inverse Gaussian distribution. - - - - Multivariate Inverse Wishart distribution. This distribution is - parameterized by the degrees of freedom nu and the scale matrix S. The inverse Wishart distribution - is the conjugate prior for the covariance matrix of a multivariate normal distribution. - Wikipedia - Inverse-Wishart distribution. - - - - - Caches the Cholesky factorization of the scale matrix. - - - - - Initializes a new instance of the class. - - The degree of freedom (ν) for the inverse Wishart distribution. - The scale matrix (Ψ) for the inverse Wishart distribution. - - - - Initializes a new instance of the class. - - The degree of freedom (ν) for the inverse Wishart distribution. - The scale matrix (Ψ) for the inverse Wishart distribution. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The degree of freedom (ν) for the inverse Wishart distribution. - The scale matrix (Ψ) for the inverse Wishart distribution. - - - - Gets or sets the degree of freedom (ν) for the inverse Wishart distribution. - - - - - Gets or sets the scale matrix (Ψ) for the inverse Wishart distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean. - - The mean of the distribution. - - - - Gets the mode of the distribution. - - The mode of the distribution. - A. O'Hagan, and J. J. Forster (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference. 2B (2 ed.). Arnold. ISBN 0-340-80752-0. 
- - - - Gets the variance of the distribution. - - The variance of the distribution. - Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. - - - - Evaluates the probability density function for the inverse Wishart distribution. - - The matrix at which to evaluate the density at. - If the argument does not have the same dimensions as the scale matrix. - the density at . - - - - Samples an inverse Wishart distributed random variable by sampling - a Wishart random variable and inverting the matrix. - - a sample from the distribution. - - - - Samples an inverse Wishart distributed random variable by sampling - a Wishart random variable and inverting the matrix. - - The random number generator to use. - The degree of freedom (ν) for the inverse Wishart distribution. - The scale matrix (Ψ) for the inverse Wishart distribution. - a sample from the distribution. - - - - Univariate Probability Distribution. - - - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the median of the distribution. - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Continuous Univariate Laplace distribution. - The Laplace distribution is a distribution over the real numbers parameterized by a mean and - scale parameter. The PDF is: - p(x) = \frac{1}{2 * scale} \exp{- |x - mean| / scale}. - Wikipedia - Laplace distribution. - - - - - Initializes a new instance of the class (location = 0, scale = 1). - - - - - Initializes a new instance of the class. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - If is negative. - - - - Initializes a new instance of the class. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - The random number generator which is used to draw random samples. - If is negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - - - - Gets the location (μ) of the Laplace distribution. - - - - - Gets the scale (b) of the Laplace distribution. Range: b > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. 
P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Samples a Laplace distributed random variable. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sample from the Laplace distribution. - - a sample from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - the cumulative distribution at location . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Continuous Univariate Log-Normal distribution. - For details about this distribution, see - Wikipedia - Log-Normal distribution. - - - - - Initializes a new instance of the class. - The distribution will be initialized with the default - random number generator. - - The log-scale (μ) of the logarithm of the distribution. - The shape (σ) of the logarithm of the distribution. Range: σ ≥ 0. - - - - Initializes a new instance of the class. - The distribution will be initialized with the default - random number generator. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. - - - - Constructs a log-normal distribution with the desired mu and sigma parameters. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. Optional, can be null. 
- A log-normal distribution. - - - - Constructs a log-normal distribution with the desired mean and variance. - - The mean of the log-normal distribution. - The variance of the log-normal distribution. - The random number generator which is used to draw random samples. Optional, can be null. - A log-normal distribution. - - - - Estimates the log-normal distribution parameters from sample data with maximum-likelihood. - - The samples to estimate the distribution parameters from. - The random number generator which is used to draw random samples. Optional, can be null. - A log-normal distribution. - MATLAB: lognfit - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - - - - Gets the log-scale (μ) (mean of the logarithm) of the distribution. - - - - - Gets the shape (σ) (standard deviation of the logarithm) of the distribution. Range: σ ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mu of the log-normal distribution. - - - - - Gets the variance of the log-normal distribution. - - - - - Gets the standard deviation of the log-normal distribution. - - - - - Gets the entropy of the log-normal distribution. - - - - - Gets the skewness of the log-normal distribution. - - - - - Gets the mode of the log-normal distribution. - - - - - Gets the median of the log-normal distribution. - - - - - Gets the minimum of the log-normal distribution. - - - - - Gets the maximum of the log-normal distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the log-normal distribution using the Box-Muller algorithm. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the density at . - - MATLAB: lognpdf - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the density. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). 
- - The location at which to compute the cumulative distribution function. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the cumulative distribution at location . - - MATLAB: logncdf - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the inverse cumulative density at . - - MATLAB: logninv - - - - Generates a sample from the log-normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the log-normal distribution using the Box-Muller algorithm. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Multivariate Matrix-valued Normal distributions. The distribution - is parameterized by a mean matrix (M), a covariance matrix for the rows (V) and a covariance matrix - for the columns (K). If the dimension of M is d-by-m then V is d-by-d and K is m-by-m. - Wikipedia - MatrixNormal distribution. - - - - - The mean of the matrix normal distribution. - - - - - The covariance matrix for the rows. - - - - - The covariance matrix for the columns. - - - - - Initializes a new instance of the class. - - The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - If the dimensions of the mean and two covariance matrices don't match. - - - - Initializes a new instance of the class. - - The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - The random number generator which is used to draw random samples. - If the dimensions of the mean and two covariance matrices don't match. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - - - - Gets the mean. (M) - - The mean of the distribution. 
- - - - Gets the row covariance. (V) - - The row covariance. - - - - Gets the column covariance. (K) - - The column covariance. - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Evaluates the probability density function for the matrix normal distribution. - - The matrix at which to evaluate the density at. - the density at - If the argument does not have the correct dimensions. - - - - Samples a matrix normal distributed random variable. - - A random number from this distribution. - - - - Samples a matrix normal distributed random variable. - - The random number generator to use. - The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - If the dimensions of the mean and two covariance matrices don't match. - a sequence of samples from the distribution. - - - - Samples a vector normal distributed random variable. - - The random number generator to use. - The mean of the vector normal distribution. - The covariance matrix of the vector normal distribution. - a sequence of samples from defined distribution. - - - - Multivariate Multinomial distribution. For details about this distribution, see - Wikipedia - Multinomial distribution. - - - The distribution is parameterized by a vector of ratios: in other words, the parameter - does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized - to sum to 1 in floating point representation. - - - - - Stores the normalized multinomial probabilities. - - - - - The number of trials. - - - - - Initializes a new instance of the Multinomial class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - If any of the probabilities are negative or do not sum to one. - If is negative. - - - - Initializes a new instance of the Multinomial class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - The random number generator which is used to draw random samples. - If any of the probabilities are negative or do not sum to one. - If is negative. - - - - Initializes a new instance of the Multinomial class from histogram . The distribution will - not be automatically updated when the histogram changes. - - Histogram instance - The number of trials. - If any of the probabilities are negative or do not sum to one. - If is negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - If any of the probabilities are negative returns false, - if the sum of parameters is 0.0, or if the number of trials is negative; otherwise true. - - - - Gets the proportion of ratios. - - - - - Gets the number of trials. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Computes values of the probability mass function. - - Non-negative integers x1, ..., xk - The probability mass at location . - When is null. 
- When length of is not equal to event probabilities count. - - - - Computes values of the log probability mass function. - - Non-negative integers x1, ..., xk - The log probability mass at location . - When is null. - When length of is not equal to event probabilities count. - - - - Samples one multinomial distributed random variable. - - the counts for each of the different possible values. - - - - Samples a sequence multinomially distributed random variables. - - a sequence of counts for each of the different possible values. - - - - Samples one multinomial distributed random variable. - - The random number generator to use. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - the counts for each of the different possible values. - - - - Samples a multinomially distributed random variable. - - The random number generator to use. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of variables needed. - a sequence of counts for each of the different possible values. - - - - Discrete Univariate Negative Binomial distribution. - The negative binomial is a distribution over the natural numbers with two parameters r, p. For the special - case that r is an integer one can interpret the distribution as the number of failures before the r'th success - when the probability of success is p. - Wikipedia - NegativeBinomial distribution. - - - - - Initializes a new instance of the class. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Initializes a new instance of the class. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Gets the number of successes. Range: r ≥ 0. - - - - - Gets the probability of success. Range: 0 ≤ p ≤ 1. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . 
- - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - the cumulative distribution at location . - - - - - Samples a negative binomial distributed random variable. - - The random number generator to use. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - a sample from the distribution. - - - - Samples a NegativeBinomial distributed random variable. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of NegativeBinomial distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a random variable. - - The random number generator to use. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The random number generator to use. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Samples a random variable. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Continuous Univariate Normal distribution, also known as Gaussian distribution. - For details about this distribution, see - Wikipedia - Normal distribution. - - - - - Initializes a new instance of the Normal class. 
This is a normal distribution with mean 0.0 - and standard deviation 1.0. The distribution will - be initialized with the default random number generator. - - - - - Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 - and standard deviation 1.0. The distribution will - be initialized with the default random number generator. - - The random number generator which is used to draw random samples. - - - - Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will - be initialized with the default random number generator. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - - - - Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will - be initialized with the default random number generator. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. - - - - Constructs a normal distribution from a mean and standard deviation. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. Optional, can be null. - a normal distribution. - - - - Constructs a normal distribution from a mean and variance. - - The mean (μ) of the normal distribution. - The variance (σ^2) of the normal distribution. - The random number generator which is used to draw random samples. Optional, can be null. - A normal distribution. - - - - Constructs a normal distribution from a mean and precision. - - The mean (μ) of the normal distribution. - The precision of the normal distribution. - The random number generator which is used to draw random samples. Optional, can be null. - A normal distribution. - - - - Estimates the normal distribution parameters from sample data with maximum-likelihood. - - The samples to estimate the distribution parameters from. - The random number generator which is used to draw random samples. Optional, can be null. - A normal distribution. - MATLAB: normfit - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - - - - Gets the mean (μ) of the normal distribution. - - - - - Gets the standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - - - - - Gets the variance of the normal distribution. - - - - - Gets the precision of the normal distribution. - - - - - Gets the random number generator which is used to draw random samples. - - - - - Gets the entropy of the normal distribution. - - - - - Gets the skewness of the normal distribution. - - - - - Gets the mode of the normal distribution. - - - - - Gets the median of the normal distribution. - - - - - Gets the minimum of the normal distribution. - - - - - Gets the maximum of the normal distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. 
- the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the normal distribution using the Box-Muller algorithm. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The location at which to compute the density. - the density at . - - MATLAB: normpdf - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - the cumulative distribution at location . - - MATLAB: normcdf - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - the inverse cumulative density at . - - MATLAB: norminv - - - - Generates a sample from the normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the normal distribution using the Box-Muller algorithm. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. - - The mean (μ) of the normal distribution. 
[diff hunk elided: auto-generated XML documentation text for probability distribution classes (NormalGamma, Pareto, Poisson, Rayleigh, Skewed Generalized Error, Skewed Generalized t, Stable, Student's t, Triangular, Truncated Pareto, Weibull, Wishart, Zipf) and integer number-theory helpers, consistent with the Math.NET Numerics library]
- - The number to very whether it's a perfect square. - True if and only if it is a perfect square. - - - - Raises 2 to the provided integer exponent (0 <= exponent < 31). - - The exponent to raise 2 up to. - 2 ^ exponent. - - - - - Raises 2 to the provided integer exponent (0 <= exponent < 63). - - The exponent to raise 2 up to. - 2 ^ exponent. - - - - - Evaluate the binary logarithm of an integer number. - - Two-step method using a De Bruijn-like sequence table lookup. - - - - Find the closest perfect power of two that is larger or equal to the provided - 32 bit integer. - - The number of which to find the closest upper power of two. - A power of two. - - - - - Find the closest perfect power of two that is larger or equal to the provided - 64 bit integer. - - The number of which to find the closest upper power of two. - A power of two. - - - - - Returns the greatest common divisor (gcd) of two integers using Euclid's algorithm. - - First Integer: a. - Second Integer: b. - Greatest common divisor gcd(a,b) - - - - Returns the greatest common divisor (gcd) of a set of integers using Euclid's - algorithm. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Returns the greatest common divisor (gcd) of a set of integers using Euclid's algorithm. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). - - First Integer: a. - Second Integer: b. - Resulting x, such that a*x + b*y = gcd(a,b). - Resulting y, such that a*x + b*y = gcd(a,b) - Greatest common divisor gcd(a,b) - - - long x,y,d; - d = Fn.GreatestCommonDivisor(45,18,out x, out y); - -> d == 9 && x == 1 && y == -2 - - The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. - - - - - Returns the least common multiple (lcm) of two integers using Euclid's algorithm. - - First Integer: a. - Second Integer: b. - Least common multiple lcm(a,b) - - - - Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. - - List of Integers. - Least common multiple lcm(list of integers) - - - - Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. - - List of Integers. - Least common multiple lcm(list of integers) - - - - Returns the greatest common divisor (gcd) of two big integers. - - First Integer: a. - Second Integer: b. - Greatest common divisor gcd(a,b) - - - - Returns the greatest common divisor (gcd) of a set of big integers. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Returns the greatest common divisor (gcd) of a set of big integers. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). - - First Integer: a. - Second Integer: b. - Resulting x, such that a*x + b*y = gcd(a,b). - Resulting y, such that a*x + b*y = gcd(a,b) - Greatest common divisor gcd(a,b) - - - long x,y,d; - d = Fn.GreatestCommonDivisor(45,18,out x, out y); - -> d == 9 && x == 1 && y == -2 - - The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. - - - - - Returns the least common multiple (lcm) of two big integers. - - First Integer: a. - Second Integer: b. - Least common multiple lcm(a,b) - - - - Returns the least common multiple (lcm) of a set of big integers. - - List of Integers. 
- Least common multiple lcm(list of integers) - - - - Returns the least common multiple (lcm) of a set of big integers. - - List of Integers. - Least common multiple lcm(list of integers) - - - - Collection of functions equivalent to those provided by Microsoft Excel - but backed instead by Math.NET Numerics. - We do not recommend to use them except in an intermediate phase when - porting over solutions previously implemented in Excel. - - - - - An algorithm failed to converge. - - - - - An algorithm failed to converge due to a numerical breakdown. - - - - - An error occurred calling native provider function. - - - - - An error occurred calling native provider function. - - - - - Native provider was unable to allocate sufficient memory. - - - - - Native provider failed LU inversion do to a singular U matrix. - - - - - Compound Monthly Return or Geometric Return or Annualized Return - - - - - Average Gain or Gain Mean - This is a simple average (arithmetic mean) of the periods with a gain. It is calculated by summing the returns for gain periods (return 0) - and then dividing the total by the number of gain periods. - - http://www.offshore-library.com/kb/statistics.php - - - - Average Loss or LossMean - This is a simple average (arithmetic mean) of the periods with a loss. It is calculated by summing the returns for loss periods (return < 0) - and then dividing the total by the number of loss periods. - - http://www.offshore-library.com/kb/statistics.php - - - - Calculation is similar to Standard Deviation , except it calculates an average (mean) return only for periods with a gain - and measures the variation of only the gain periods around the gain mean. Measures the volatility of upside performance. - © Copyright 1996, 1999 Gary L.Gastineau. First Edition. © 1992 Swiss Bank Corporation. - - - - - Similar to standard deviation, except this statistic calculates an average (mean) return for only the periods with a loss and then - measures the variation of only the losing periods around this loss mean. This statistic measures the volatility of downside performance. - - http://www.offshore-library.com/kb/statistics.php - - - - This measure is similar to the loss standard deviation except the downside deviation - considers only returns that fall below a defined minimum acceptable return (MAR) rather than the arithmetic mean. - For example, if the MAR is 7%, the downside deviation would measure the variation of each period that falls below - 7%. (The loss standard deviation, on the other hand, would take only losing periods, calculate an average return for - the losing periods, and then measure the variation between each losing return and the losing return average). - - - - - A measure of volatility in returns below the mean. It's similar to standard deviation, but it only - looks at periods where the investment return was less than average return. - - - - - Measures a fund’s average gain in a gain period divided by the fund’s average loss in a losing - period. Periods can be monthly or quarterly depending on the data frequency. - - - - - Find value x that minimizes the scalar function f(x), constrained within bounds, using the Golden Section algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. 
- For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. - The missing gradient is evaluated numerically (forward difference). - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. - For more options and diagnostics consider to use directly. - An alternative routine using conjugate gradients (CG) is available in . - - - - - Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. - For more options and diagnostics consider to use directly. - An alternative routine using conjugate gradients (CG) is available in . - - - - - Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Newton algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Newton algorithm. - For more options and diagnostics consider to use directly. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. - Maximum number of iterations. Example: 100. - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. - Maximum number of iterations. Example: 100. - - - - Find both complex roots of the quadratic equation c + b*x + a*x^2 = 0. - Note the special coefficient order ascending by exponent (consistent with polynomials). - - - - - Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. - Note the special coefficient order ascending by exponent (consistent with polynomials). - - - - - Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix - - The coefficients of the polynomial in ascending order, e.g. new double[] {5, 0, 2} = "5 + 0 x^1 + 2 x^2" - The roots of the polynomial - - - - Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix - - The polynomial. - The roots of the polynomial - - - - Find all roots of the Chebychev polynomial of the first kind. 
- - The polynomial order and therefore the number of roots. - The real domain interval begin where to start sampling. - The real domain interval end where to stop sampling. - Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*(2i-1)/(2n)) - - - - Find all roots of the Chebychev polynomial of the second kind. - - The polynomial order and therefore the number of roots. - The real domain interval begin where to start sampling. - The real domain interval end where to stop sampling. - Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*i/(n-1)) - - - - Least-Squares Curve Fitting Routines - - - - - Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, - returning its best fitting parameters as [a, b] array, - where a is the intercept and b the slope. - - - - - Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, - returning its best fitting parameter b, - where the intercept is zero and b the slope. - - - - - Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), - returning its best fitting parameters as (a, r) tuple. - - - - - Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), - returning its best fitting parameters as (a, b) tuple. - - - - - Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, - returning its best fitting parameters as (a, b) tuple. - - - - - Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. - A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. - - - - - Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, - returning a function y' for the best fitting polynomial. - A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. - - - - - Weighted Least-Squares fitting the points (x,y) and weights w to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. - A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... 
+ pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - If an intercept is added, its coefficient will be prepended to the resulting parameters. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, - returning a function y' for the best fitting combination. - If an intercept is added, its coefficient will be prepended to the resulting parameters. - - - - - Weighted Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) and weights w to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning a function y' for the best fitting combination. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), - returning its best fitting parameter p. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), - returning its best fitting parameter p0 and p1. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), - returning its best fitting parameter p0, p1 and p2. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), - returning a function y' for the best fitting curve. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), - returning a function y' for the best fitting curve. 
- - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), - returning a function y' for the best fitting curve. - - - - - Generate samples by sampling a function at the provided points. - - - - - Generate a sample sequence by sampling a function at the provided point sequence. - - - - - Generate samples by sampling a function at the provided points. - - - - - Generate a sample sequence by sampling a function at the provided point sequence. - - - - - Generate a linearly spaced sample vector of the given length between the specified values (inclusive). - Equivalent to MATLAB linspace but with the length as first instead of last argument. - - - - - Generate samples by sampling a function at linearly spaced points between the specified values (inclusive). - - - - - Generate a base 10 logarithmically spaced sample vector of the given length between the specified decade exponents (inclusive). - Equivalent to MATLAB logspace but with the length as first instead of last argument. - - - - - Generate samples by sampling a function at base 10 logarithmically spaced points between the specified decade exponents (inclusive). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. - Equivalent to MATLAB colon operator (:). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. - Equivalent to MATLAB colon operator (:). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - Equivalent to MATLAB double colon operator (::). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - Equivalent to MATLAB double colon operator (::). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provide step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - Equivalent to MATLAB double colon operator (::). - - - - - Generate samples by sampling a function at linearly spaced points within the inclusive interval (start, stop) and the provide step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - - - - - Create a periodic wave. - - The number of samples to generate. - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create a periodic wave. - - The number of samples to generate. - The function to apply to each of the values and evaluate the resulting sample. - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. 
- Optional delay, relative to the phase. - - - - Create an infinite periodic wave sequence. - - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create an infinite periodic wave sequence. - - The function to apply to each of the values and evaluate the resulting sample. - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create a Sine wave. - - The number of samples to generate. - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The maximal reached peak. - The mean, or DC part, of the signal. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create an infinite Sine wave sequence. - - Samples per unit. - Frequency in samples per unit. - The maximal reached peak. - The mean, or DC part, of the signal. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create a periodic square wave, starting with the high phase. - - The number of samples to generate. - Number of samples of the high phase. - Number of samples of the low phase. - Sample value to be emitted during the low phase. - Sample value to be emitted during the high phase. - Optional delay. - - - - Create an infinite periodic square wave sequence, starting with the high phase. - - Number of samples of the high phase. - Number of samples of the low phase. - Sample value to be emitted during the low phase. - Sample value to be emitted during the high phase. - Optional delay. - - - - Create a periodic triangle wave, starting with the raise phase from the lowest sample. - - The number of samples to generate. - Number of samples of the raise phase. - Number of samples of the fall phase. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create an infinite periodic triangle wave sequence, starting with the raise phase from the lowest sample. - - Number of samples of the raise phase. - Number of samples of the fall phase. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create a periodic sawtooth wave, starting with the lowest sample. - - The number of samples to generate. - Number of samples a full sawtooth period. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create an infinite periodic sawtooth wave sequence, starting with the lowest sample. - - Number of samples a full sawtooth period. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create an array with each field set to the same value. - - The number of samples to generate. - The value that each field should be set to. - - - - Create an infinite sequence where each element has the same value. - - The value that each element should be set to. - - - - Create a Heaviside Step sample vector. - - The number of samples to generate. - The maximal reached peak. - Offset to the time axis. 
- - - - Create an infinite Heaviside Step sample sequence. - - The maximal reached peak. - Offset to the time axis. - - - - Create a Kronecker Delta impulse sample vector. - - The number of samples to generate. - The maximal reached peak. - Offset to the time axis. Zero or positive. - - - - Create a Kronecker Delta impulse sample vector. - - The maximal reached peak. - Offset to the time axis, hence the sample index of the impulse. - - - - Create a periodic Kronecker Delta impulse sample vector. - - The number of samples to generate. - impulse sequence period. - The maximal reached peak. - Offset to the time axis. Zero or positive. - - - - Create a Kronecker Delta impulse sample vector. - - impulse sequence period. - The maximal reached peak. - Offset to the time axis. Zero or positive. - - - - Generate samples generated by the given computation. - - - - - Generate an infinite sequence generated by the given computation. - - - - - Generate a Fibonacci sequence, including zero as first value. - - - - - Generate an infinite Fibonacci sequence, including zero as first value. - - - - - Create random samples, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Create an infinite random sample sequence, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate samples by sampling a function at samples from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate a sample sequence by sampling a function at samples from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate samples by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate a sample sequence by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Create samples with independent amplitudes of standard distribution. - - - - - Create an infinite sample sequence with independent amplitudes of standard distribution. - - - - - Create samples with independent amplitudes of normal distribution and a flat spectral density. - - - - - Create an infinite sample sequence with independent amplitudes of normal distribution and a flat spectral density. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Generate samples by sampling a function at samples from a probability distribution. - - - - - Generate a sample sequence by sampling a function at samples from a probability distribution. - - - - - Generate samples by sampling a function at sample pairs from a probability distribution. - - - - - Generate a sample sequence by sampling a function at sample pairs from a probability distribution. - - - - - Globalized String Handling Helpers - - - - - Tries to get a from the format provider, - returning the current culture if it fails. - - - An that supplies culture-specific - formatting information. - - A instance. 
- - - - Tries to get a from the format - provider, returning the current culture if it fails. - - - An that supplies culture-specific - formatting information. - - A instance. - - - - Tries to get a from the format provider, returning the current culture if it fails. - - - An that supplies culture-specific - formatting information. - - A instance. - - - - Globalized Parsing: Tokenize a node by splitting it into several nodes. - - Node that contains the trimmed string to be tokenized. - List of keywords to tokenize by. - keywords to skip looking for (because they've already been handled). - - - - Globalized Parsing: Parse a double number - - First token of the number. - Culture Info. - The parsed double number using the given culture information. - - - - - Globalized Parsing: Parse a float number - - First token of the number. - Culture Info. - The parsed float number using the given culture information. - - - - - Calculates r^2, the square of the sample correlation coefficient between - the observed outcomes and the observed predictor values. - Not to be confused with R^2, the coefficient of determination, see . - - The modelled/predicted values - The observed/actual values - Squared Person product-momentum correlation coefficient. - - - - Calculates r, the sample correlation coefficient between the observed outcomes - and the observed predictor values. - - The modelled/predicted values - The observed/actual values - Person product-momentum correlation coefficient. - - - - Calculates the Standard Error of the regression, given a sequence of - modeled/predicted values, and a sequence of actual/observed values - - The modelled/predicted values - The observed/actual values - The Standard Error of the regression - - - - Calculates the Standard Error of the regression, given a sequence of - modeled/predicted values, and a sequence of actual/observed values - - The modelled/predicted values - The observed/actual values - The degrees of freedom by which the - number of samples is reduced for performing the Standard Error calculation - The Standard Error of the regression - - - - Calculates the R-Squared value, also known as coefficient of determination, - given some modelled and observed values. - - The values expected from the model. - The actual values obtained. - Coefficient of determination. - - - - Complex Fast (FFT) Implementation of the Discrete Fourier Transform (DFT). - - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Real part of the sample vector, where the FFT is evaluated in place. - Imaginary part of the sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Real part of the sample vector, where the FFT is evaluated in place. 
- Imaginary part of the sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed from the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. - Fourier Transform Convention Options. - - - - Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed form the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. - - Sample data, where the FFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. - - Sample data, where the FFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. - - Sample data, organized row by row, where the FFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. - - Sample data, organized row by row, where the FFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. - - Sample matrix, where the FFT is evaluated in place - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. - - Sample matrix, where the FFT is evaluated in place - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. 
- Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Real part of the sample vector, where the iFFT is evaluated in place. - Imaginary part of the sample vector, where the iFFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Real part of the sample vector, where the iFFT is evaluated in place. - Imaginary part of the sample vector, where the iFFT is evaluated in place. - Fourier Transform Convention Options. - - - - Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed form the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. - Fourier Transform Convention Options. - - - - Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed form the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. - - Spectrum data, where the iFFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. - - Spectrum data, where the iFFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. - - Sample data, organized row by row, where the iFFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. - - Sample data, organized row by row, where the iFFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. 
- - Sample matrix, where the iFFT is evaluated in place - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. - - Sample matrix, where the iFFT is evaluated in place - Fourier Transform Convention Options. - - - - Naive forward DFT, useful e.g. to verify faster algorithms. - - Time-space sample vector. - Fourier Transform Convention Options. - Corresponding frequency-space vector. - - - - Naive forward DFT, useful e.g. to verify faster algorithms. - - Time-space sample vector. - Fourier Transform Convention Options. - Corresponding frequency-space vector. - - - - Naive inverse DFT, useful e.g. to verify faster algorithms. - - Frequency-space sample vector. - Fourier Transform Convention Options. - Corresponding time-space vector. - - - - Naive inverse DFT, useful e.g. to verify faster algorithms. - - Frequency-space sample vector. - Fourier Transform Convention Options. - Corresponding time-space vector. - - - - Radix-2 forward FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Radix-2 forward FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Radix-2 inverse FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Radix-2 inverse FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Bluestein forward FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Bluestein forward FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Bluestein inverse FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Bluestein inverse FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Generate the frequencies corresponding to each index in frequency space. - The frequency space has a resolution of sampleRate/N. - Index 0 corresponds to the DC part, the following indices correspond to - the positive frequencies up to the Nyquist frequency (sampleRate/2), - followed by the negative frequencies wrapped around. - - Number of samples. - The sampling rate of the time-space data. - - - - Fourier Transform Convention - - - - - Inverse integrand exponent (forward: positive sign; inverse: negative sign). - - - - - Only scale by 1/N in the inverse direction; No scaling in forward direction. - - - - - Don't scale at all (neither on forward nor on inverse transformation). - - - - - Universal; Symmetric scaling and common exponent (used in Maple). - - - - - Only scale by 1/N in the inverse direction; No scaling in forward direction (used in Matlab). [= AsymmetricScaling] - - - - - Inverse integrand exponent; No scaling at all (used in all Numerical Recipes based implementations). [= InverseExponent | NoScaling] - - - - - Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). - - - Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). - - - - - Naive forward DHT, useful e.g. 
to verify faster algorithms. - - Time-space sample vector. - Hartley Transform Convention Options. - Corresponding frequency-space vector. - - - - Naive inverse DHT, useful e.g. to verify faster algorithms. - - Frequency-space sample vector. - Hartley Transform Convention Options. - Corresponding time-space vector. - - - - Rescale FFT-the resulting vector according to the provided convention options. - - Fourier Transform Convention Options. - Sample Vector. - - - - Rescale the iFFT-resulting vector according to the provided convention options. - - Fourier Transform Convention Options. - Sample Vector. - - - - Naive generic DHT, useful e.g. to verify faster algorithms. - - Time-space sample vector. - Corresponding frequency-space vector. - - - - Hartley Transform Convention - - - - - Only scale by 1/N in the inverse direction; No scaling in forward direction. - - - - - Don't scale at all (neither on forward nor on inverse transformation). - - - - - Universal; Symmetric scaling. - - - - - Numerical Integration (Quadrature). - - - - - Approximation of the definite integral of an analytic smooth function on a closed interval. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function on a closed interval. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Approximation of the finite integral in the given interval. - - - - Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. - - The 2-dimensional analytic smooth function to integrate. - Where the interval starts for the first (inside) integral, exclusive and finite. - Where the interval ends for the first (inside) integral, exclusive and finite. - Where the interval starts for the second (outside) integral, exclusive and finite. - /// Where the interval ends for the second (outside) integral, exclusive and finite. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. - - The 2-dimensional analytic smooth function to integrate. - Where the interval starts for the first (inside) integral, exclusive and finite. - Where the interval ends for the first (inside) integral, exclusive and finite. - Where the interval starts for the second (outside) integral, exclusive and finite. - /// Where the interval ends for the second (outside) integral, exclusive and finite. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth function to integrate. - Where the interval starts. - Where the interval stops. - The expected relative accuracy of the approximation. 
- Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by Gauss-Legendre quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth function to integrate. - Where the interval starts. - Where the interval stops. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth function to integrate. - Where the interval starts. - Where the interval stops. - The expected relative accuracy of the approximation. - The maximum number of interval splittings permitted before stopping. - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth function to integrate. - Where the interval starts. - Where the interval stops. - The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation - The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. - The expected relative accuracy of the approximation. - The maximum number of interval splittings permitted before stopping - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points - Approximation of the finite integral in the given interval. - - - - Numerical Contour Integration of a complex-valued function over a real variable,. - - - - - Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts. - Where the interval stops. - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts. - Where the interval stops. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. 
When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts. - Where the interval stops. - The expected relative accuracy of the approximation. - The maximum number of interval splittings permitted before stopping - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts. - Where the interval stops. - The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation - The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. - The expected relative accuracy of the approximation. - The maximum number of interval splittings permitted before stopping - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points - Approximation of the finite integral in the given interval. - - - - Analytic integration algorithm for smooth functions with no discontinuities - or derivative discontinuities and no poles inside the interval. - - - - - Maximum number of iterations, until the asked - maximum error is (likely to be) satisfied. - - - - - Approximate the integral by the double exponential transformation - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Approximate the integral by the double exponential transformation - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Compute the abscissa vector for a single level. - - The level to evaluate the abscissa vector for. - Abscissa Vector. - - - - Compute the weight vector for a single level. - - The level to evaluate the weight vector for. - Weight Vector. - - - - Precomputed abscissa vector per level. - - - - - Precomputed weight vector per level. - - - - - Getter for the order. - - - - - Getter that returns a clone of the array containing the Kronrod abscissas. - - - - - Getter that returns a clone of the array containing the Kronrod weights. - - - - - Getter that returns a clone of the array containing the Gauss weights. - - - - - Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b) - - The analytic smooth function to integrate - Where the interval starts - Where the interval stops - The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation - The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. 
The same adaptive Gauss-Kronrod quadrature is available for complex-valued functions defined on the real axis, again parameterised by the maximum relative error, the maximum number of interval splittings and the number of Gauss-Kronrod points (precomputed for 15, 21, 31, 41, 51 and 61 points).

The Gauss-Legendre rule type approximates a definite integral with an Nth-order rule: it is constructed from a finite interval [intervalBegin, intervalEnd] and an order, and exposes getters for the i-th abscissa and the i-th weight, clones of the abscissa and weight arrays, the order, the interval begin and the interval end. Static helpers integrate real-valued and complex-valued functions over a finite interval, and a two-dimensional variant integrates over the rectangle [a, b] x [c, d], applying the same rule to the inner and the outer integral. Internally, the Gauss-Kronrod abscissas/weights are derived from the coefficients of a Stieltjes polynomial expressed in terms of Legendre polynomials, using the value and derivative of a Legendre series and of a Legendre polynomial at given points; the Gauss-Legendre abscissas/weights are computed with Pavel Holoborodko's algorithm to a requested precision (1e-10 is usually fine) and are generated as non-negative values over [-1, 1] for the given order.
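The two-dimensional Gauss-Legendre rule over a rectangle can be sketched as a tensor product of the one-dimensional rule; the following Python snippet is illustrative only and does not mirror the library's API.

```python
import numpy as np

def gauss_legendre_2d(f, a, b, c, d, order=16):
    """Tensor-product Gauss-Legendre rule over the rectangle [a, b] x [c, d]."""
    x, w = np.polynomial.legendre.leggauss(order)
    # Map the 1-D nodes from [-1, 1] onto each axis of the rectangle.
    xm, xh = 0.5 * (b + a), 0.5 * (b - a)
    ym, yh = 0.5 * (d + c), 0.5 * (d - c)
    xs, ys = xh * x + xm, yh * x + ym
    total = 0.0
    for wi, xi in zip(w, xs):
        for wj, yj in zip(w, ys):
            total += wi * wj * f(xi, yj)
    return xh * yh * total

# Example: integral of x*y over [0,1] x [0,2] (exact value 1.0)
print(gauss_legendre_2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 2.0))
```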
A mapping step then rescales the non-negative abscissas/weights from [-1, 1] to the requested interval [intervalBegin, intervalEnd]; the result is held in a Gauss-point container carrying the abscissas/weights, the order and the interval bounds, and a companion container holds two such Gauss points. The trapezium rule of the Newton-Cotes family (see Wikipedia, Trapezium Rule) is offered as a direct two-point approximation, as a composite N-point approximation with a chosen number of subdivision partitions, and as an adaptive approximation with a target accuracy, each for real-valued and for complex-valued functions defined on the real domain; an additional adaptive variant accepts per-level abscissa and weight vector providers, a first-level step and a target relative accuracy.
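A hedged Python sketch of the composite trapezium rule described above; the function name and argument layout are illustrative, not the library's signature.

```python
import math

def trapezium_composite(f, a, b, partitions):
    """Composite trapezium rule with the given number of equally sized partitions."""
    h = (b - a) / partitions
    total = 0.5 * (f(a) + f(b))
    for k in range(1, partitions):
        total += f(a + k * h)
    return h * total

# Example: integral of 1/x over [1, e] is exactly 1
print(trapezium_composite(lambda x: 1.0 / x, 1.0, math.e, 1000))
```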
The provider-based adaptive variant exists for complex-valued integrands as well. Simpson's rule is offered as a direct three-point approximation and as a composite N-point approximation requiring an even number of subdivision partitions (sketched below). An interpolation factory then creates interpolation schemes from arbitrary sample points t and sample values x(t), optimized for those points and usable for interpolation and extrapolation at arbitrary points: a general default scheme, a Floater-Hormann rational pole-free interpolation, a Bulirsch-Stoer rational interpolation, a barycentric polynomial interpolation for equidistant sample points, and a Neville polynomial interpolation (for equidistant points the barycentric equidistant variant is much more robust, and the rational pole-free scheme is often the more robust general choice). Each factory method notes that data already sorted in arrays is handled more efficiently by the corresponding Sorted creation method, e.g. MathNet.Numerics.Interpolation.Barycentric.InterpolateRationalFloaterHormannSorted, BulirschStoerRationalInterpolation.InterpolateSorted or Barycentric.InterpolatePolynomialEquidistantSorted.
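Correspondingly, a minimal sketch of the composite Simpson's rule with its even-partition requirement; again illustrative Python, not the library's C# API.

```python
import math

def simpson_composite(f, a, b, partitions):
    """Composite Simpson's rule; the number of partitions must be even."""
    if partitions % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of partitions")
    h = (b - a) / partitions
    total = f(a) + f(b)
    for k in range(1, partitions):
        total += f(a + k * h) * (4 if k % 2 == 1 else 2)
    return total * h / 3.0

# Example: integral of sin(x) over [0, pi] is 2
print(simpson_composite(math.sin, 0.0, math.pi, 100))
```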
Further factory methods create piecewise linear, piecewise log-linear, natural cubic spline (zero second derivatives at the boundaries), Akima cubic spline (robust to outliers), cubic Hermite spline (from sample points, values and slopes/first derivatives) and step interpolations, again with pointers to the more efficient Sorted creation methods (LinearSpline.InterpolateSorted, LogLinear.InterpolateSorted, CubicSpline.InterpolateNaturalSorted, CubicSpline.InterpolateAkimaSorted, CubicSpline.InterpolateHermiteSorted, StepInterpolation.InterpolateSorted).

The barycentric interpolation algorithm stores the sample points, the sample values and the barycentric weights, all sorted ascendingly by x, and supports neither differentiation nor integration. It can be created as a barycentric polynomial interpolation from (x, y) pairs with equidistant x, either sorted or unsorted (the in-place variant warns that it can cause the data arrays to be reordered), from values over linearly/equidistantly spaced points within an interval, or as a barycentric rational interpolation without poles using Mike Floater and Kai Hormann's algorithm, with an order between 0 and N where values between 3 and 8 usually give good results (see the sketch below for the equidistant polynomial case). The type reports whether differentiation and integration are supported and interpolates the value x(t) at a point t; first and second derivatives, indefinite integrals and definite integrals are not supported. The Bulirsch-Stoer rational interpolation (with poles), using Roland Bulirsch and Josef Stoer's algorithm, likewise stores sample points and values sorted ascendingly by x and supports neither differentiation nor integration.
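The barycentric formula for equidistant sample points can be sketched as follows; the closed-form weights (-1)^j * C(n, j) are the standard equidistant-node case, and all names are illustrative, not the library's implementation.

```python
from math import comb

def barycentric_equidistant(xs, ys, t):
    """Barycentric polynomial interpolation for equidistant sample points xs."""
    n = len(xs) - 1
    # For equidistant nodes the barycentric weights reduce to alternating binomials.
    weights = [(-1) ** j * comb(n, j) for j in range(n + 1)]
    num = den = 0.0
    for xj, yj, wj in zip(xs, ys, weights):
        if t == xj:          # exactly on a sample point
            return yj
        c = wj / (t - xj)
        num += c * yj
        den += c
    return num / den

# Example: interpolate x^2 sampled at 0..4 and evaluate at 2.5 (exact 6.25)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
print(barycentric_equidistant(xs, [x * x for x in xs], 2.5))
```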
The Bulirsch-Stoer interpolation is created from (x, y) pairs sorted ascendingly by x or from unsorted pairs (the in-place variant can cause the data arrays to be reordered); it interpolates x(t) at a point t, while derivatives and integrals are not supported.

Cubic spline interpolation supports both differentiation and integration. It stores the sample points (N+1, sorted ascending) and the zeroth- through third-order spline coefficients (N) and can be created as a Hermite spline from points, values and slopes (first derivatives), as an Akima spline (robust to outliers), as a spline with custom boundary/termination conditions, or as a natural spline with zero second derivatives at the two boundaries, each from sorted or unsorted data. A three-point differentiation helper approximates the derivative at a given point index from three sample indices, and a tridiagonal solve helper solves the coefficient system from the vectors a, b, c and d, modifying b and d and returning the solution vector x (see the sketch below). The spline provides interpolation, first and second derivatives, indefinite and definite integrals, and a search for the index of the greatest sample point smaller than t (the left index of the closest segment, used for extrapolation); the general interpolation interface declares the same capability flags and evaluation members. Piecewise linear interpolation also supports both differentiation and integration, storing the sample points (N+1, sorted ascending), the sample values (N or N+1; intercepts, zeroth-order coefficients) and the slopes (N, first-order coefficients); it is created from sorted or unsorted (x, y) pairs and provides interpolation, derivatives, and indefinite and definite integrals.
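The tridiagonal solve helper mentioned above is essentially the Thomas algorithm; a small, self-contained sketch with assumed naming, not the library's code.

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a is the sub-, b the main,
    c the super-diagonal and d the right-hand side (b and d are modified)."""
    n = len(d)
    for i in range(1, n):                 # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# Example: a small diagonally dominant system with solution [1, 1, 1]
print(solve_tridiagonal([0.0, 1.0, 1.0], [4.0, 4.0, 4.0], [1.0, 1.0, 0.0], [5.0, 6.0, 5.0]))
```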
Like the splines, the linear spline locates the index of the greatest sample point smaller than t (the left index of the closest segment for extrapolation). Piecewise log-linear interpolation supports differentiation but not integration; it wraps an internal spline over the sample points and the natural logarithm of the sample values and is created from sorted or unsorted (x, y) pairs (the in-place variant can cause the data arrays to be reordered and modified). Lagrange polynomial interpolation using Neville's algorithm supports differentiation but not integration; when working with equidistant or Chebyshev sample points, the barycentric algorithms specialised for those cases are recommended instead. It is created from sorted or unsorted (x, y) pairs, evaluates x(t) and its first and second derivatives at a point t, and supports neither indefinite nor definite integration (the recurrence is sketched below). Quadratic spline interpolation supports both differentiation and integration, storing the sample points (N+1, sorted ascending) and the zeroth-, first- and second-order spline coefficients (N).
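Neville's recurrence itself is compact enough to sketch directly; the following Python function is an illustration of the algorithm, not the library's implementation.

```python
def neville(xs, ys, t):
    """Neville's algorithm: evaluate the Lagrange interpolating polynomial at t."""
    p = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            # Combine neighbouring estimates of increasing polynomial degree.
            p[i] = ((t - xs[i + level]) * p[i] + (xs[i] - t) * p[i + 1]) / (xs[i] - xs[i + level])
    return p[0]

# Example: interpolate x^3 at arbitrary nodes and evaluate at 1.5 (exact 3.375)
xs = [0.0, 1.0, 2.0, 4.0]
print(neville(xs, [x ** 3 for x in xs], 1.5))
```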
The quadratic spline provides interpolation, first and second derivatives, indefinite and definite integrals, and the same search for the greatest sample point smaller than t. The left and right boundary conditions available for splines are: natural boundary (zero second derivative), parabolically terminated boundary, fixed first derivative at the boundary, and fixed second derivative at the boundary. Step interpolation models a step function where the start of each segment is included and the last segment is open-ended: segment i is [x_i, x_i+1) for i < N and [x_i, infinity) for i = N, the domain is all real numbers, and y = 0 where x lies below the first sample point (see the sketch below). It supports both differentiation and integration, stores the sample points (N, sorted ascending) and the segment values starting at the corresponding sample points, is created from sorted or unsorted (x, y) pairs, and provides interpolation, derivatives, indefinite and definite integrals, and the lookup of the greatest sample point smaller than t. A wrapper type combines an interpolation with a transformation of the interpolated values; it supports neither differentiation nor integration, is created from sorted or unsorted (x, y) pairs (the in-place variant can cause the data arrays to be reordered and modified), and evaluates only the interpolated value x(t) at a point t.
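The step interpolation and its "greatest sample point not larger than t" lookup can be sketched with a plain bisection search; illustrative Python with assumed names.

```python
from bisect import bisect_right

def step_interpolate(xs, ys, t):
    """Step function: each segment [x_i, x_i+1) carries the value ys[i];
    left of the first sample point the function is 0."""
    i = bisect_right(xs, t) - 1   # index of the greatest sample point <= t
    return ys[i] if i >= 0 else 0.0

xs = [0.0, 1.0, 2.5]
ys = [10.0, 20.0, 30.0]
print(step_interpolate(xs, ys, 1.7))   # 20.0
print(step_interpolate(xs, ys, -1.0))  # 0.0
```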
The dense matrix type stores its elements in a one-dimensional array in column-major order, column by column (see the indexing sketch below); internal row and column counts are kept alongside RowCount and ColumnCount to speed up index calculations into the data array, and the raw data array is exposed. A dense matrix can be created straight from an initialized matrix storage instance (used directly without copying, intended for advanced scenarios working directly with storage for performance or interop reasons), as a square or rectangular matrix of a given size with all cells initialized to zero (zero-length matrices are not supported; a row, column or order count below one throws), or directly bound to a raw column-major array (very efficient, but changes to the array and the matrix affect each other). It can also be created as an independent copy, with a newly allocated memory block, of another matrix, a two-dimensional array, an indexed enumerable (keys must be provided at most once, zero is assumed for omitted keys), a column-major enumerable, an enumerable of column enumerables, or column arrays.
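Column-major storage means element (i, j) lives at index j * rows + i of the one-dimensional data array; a tiny sketch (names are illustrative):

```python
def column_major_index(row, col, row_count):
    """Index of element (row, col) in a column-major backing array."""
    return col * row_count + row

# A 3x2 matrix stored column by column: [a00, a10, a20, a01, a11, a21]
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
rows, cols = 3, 2
print(data[column_major_index(2, 1, rows)])  # element (2,1) -> 6.0
```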
Additional creation methods copy column vectors, enumerables of row enumerables, row arrays and row vectors, place a given vector or array on the diagonal, fill every element with a single value or with an init function, fill only the diagonal with a value or an init function, build a square identity matrix with each diagonal value set to one, or sample every value from a provided random distribution; each copying variant allocates a new, independent memory block. The matrix calculates the induced L1 norm (the maximum absolute column sum), the induced infinity norm (the maximum absolute row sum) and the entry-wise Frobenius norm (the square root of the sum of the squared values), as illustrated below. Result-buffer operations write into a supplied result matrix or vector: negation; adding or subtracting a scalar or another matrix (throwing if the other matrix is null or the dimensions differ); multiplying or dividing by a scalar; multiplying with a vector or another matrix, including the variants that use the transpose of this matrix or the transpose of the other matrix; pointwise multiplication, pointwise division and pointwise raising to an exponent; and the canonical modulus (the result has the sign of the divisor) and the remainder (% operator, the result has the sign of the dividend) for a given scalar divisor or scalar dividend.
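The three matrix norms mentioned above reduce to simple reductions over the entries; a short NumPy sketch for illustration, not the library's code.

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

l1_induced  = np.abs(A).sum(axis=0).max()    # maximum absolute column sum
inf_induced = np.abs(A).sum(axis=1).max()    # maximum absolute row sum
frobenius   = np.sqrt((A ** 2).sum())        # entry-wise Frobenius norm

print(l1_induced, inf_induced, frobenius)    # 6.0 7.0 5.477...
```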
The matrix further computes its trace (throwing if it is not square), overloads the +, -, unary +, unary - and * operators for matrix/matrix, matrix/scalar, scalar/matrix, matrix/vector and vector/matrix combinations (allocating new memory for the result, choosing the denser representation of the two operands, and validating null arguments and conforming dimensions), and can evaluate whether it is symmetric.

The dense vector type exposes its number of elements and its raw data array. It is created from an initialized vector storage instance, with a given length and all cells initialized to zero (zero-length vectors are not supported; a length below one throws), directly bound to a raw array (changes to the array and the vector affect each other), or as an independent copy of another vector, an array, an enumerable or an indexed enumerable (keys at most once, zero assumed for omitted keys); it can also be filled with a constant value, an init function or samples from a random distribution, and conversion operators return a reference to the internal data structure or bind a new vector directly to a provided array. Arithmetic mirrors the matrix type: adding or subtracting a scalar or another vector into a result vector, negation, multiplying by a scalar, the dot product (the sum of a[i]*b[i] for all i), the corresponding +, -, * and / operator overloads with size and null checks, and the canonical modulus for a given divisor. The remainder (% operator, sign of the dividend) is available both as a result-buffer operation and as an operator; reductions return the index of the absolute minimum, the index of the absolute maximum, the index of the maximum, the index of the minimum, the sum of the elements, the L1 norm (Manhattan norm, the sum of absolute values), the L2 norm (Euclidean norm, the square root of the sum of squared values), the infinity norm (the maximum absolute value) and the general p-norm ( ∑|this[i]|^p )^(1/p), sketched below. Pointwise division and pointwise power write into a result vector. A double dense vector can be parsed from a string in the formats 'n', 'n,n,..', '(n,n,..)' or '[n,n,...]', optionally with an IFormatProvider supplying culture-specific formatting information; TryParse variants report whether the conversion succeeded and leave the result null on failure.

The diagonal matrix type can be non-square, but the diagonal always starts at element (0,0); setting a non-diagonal entry throws, except that writing 0.0 or NaN off the diagonal causes no change.
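The vector norms follow the same pattern; a brief Python illustration with an assumed function name for the p-norm.

```python
import math

def p_norm(v, p):
    """General p-norm: (sum |v[i]|^p)^(1/p)."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

v = [3.0, -4.0, 0.0]
print(sum(abs(x) for x in v))            # L1 / Manhattan norm -> 7.0
print(math.sqrt(sum(x * x for x in v)))  # L2 / Euclidean norm -> 5.0
print(max(abs(x) for x in v))            # infinity norm -> 4.0
print(p_norm(v, 3))                      # p-norm with p = 3
```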
- - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new diagonal matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to contain the diagonal elements only and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new diagonal matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - The matrix to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - The array to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new diagonal matrix with diagonal values sampled from the provided random distribution. - - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - If the result matrix's dimensions are not the same as this matrix. - - - - Multiplies this matrix with a vector and places the results into the result vector. 
- - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the division. - - - - Computes the determinant of this matrix. - - The determinant of this matrix. - - - - Returns the elements of the diagonal in a . - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Copies the values of the given array to the diagonal. - - The array to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - - Copies the values of the given to the diagonal. - - The vector to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - Calculates the condition number of this matrix. - The condition number of the matrix. - - - Computes the inverse of this matrix. - If is not a square matrix. - If is singular. - The inverse of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. 
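As a rough illustration of the vector and diagonal-matrix members documented above, here is a short C# sketch. It assumes the public Math.NET Numerics builder and norm APIs as I recall them (Vector&lt;double&gt;.Build.Dense, Matrix&lt;double&gt;.Build.DiagonalOfDiagonalArray, L1Norm/L2Norm/InfinityNorm); builder names may differ between library versions, so treat it as a sketch rather than code from this repository.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class NormDemo
{
    static void Main()
    {
        // Dense vector and a 3x3 diagonal matrix built from its diagonal entries.
        var v = Vector<double>.Build.Dense(new[] { 3.0, -4.0, 12.0 });
        var d = Matrix<double>.Build.DiagonalOfDiagonalArray(new[] { 2.0, 5.0, 0.5 });

        Console.WriteLine(v.L1Norm());        // 19  (sum of absolute values, Manhattan norm)
        Console.WriteLine(v.L2Norm());        // 13  (square root of the sum of squares)
        Console.WriteLine(v.InfinityNorm());  // 12  (maximum absolute value)

        // Matrix-vector product; a diagonal matrix just scales each element.
        Console.WriteLine(d * v);             // (6, -20, 6)
    }
}
```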
[Deleted lines (continued): the remaining DiagonalMatrix members — sub-matrix extraction, row and column permutation (always throws, since permuting a diagonal matrix is senseless), the symmetry test and scalar modulus/remainder — followed by the factorization classes for double matrices: Cholesky (A = L·L', with determinant and log-determinant), eigenvalue decomposition (EVD, A = V·D·V', with block-diagonal D for complex eigenvalue pairs), QR by Householder transformation and by modified Gram-Schmidt orthogonalization, LU with pivoting (P·A = L·U), and SVD (M = U·Σ·Vᵀ with the singular values ordered descending). Each class computes its factorization at construction time, caches it, and exposes Solve(AX = B) and Solve(Ax = b), along with determinant, rank, norm and condition-number properties, an LU-based Inverse, and managed "user" implementations built on EISPACK-derived routines (tred2, tql2, orthes, hqr2) and small BLAS-style helpers.]
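To make the factor-and-solve pattern described above concrete, here is a minimal C# sketch: the factorization is computed once when the factorization object is created, then Solve is called for one or more right-hand sides. It assumes the usual Math.NET Numerics entry points (Matrix&lt;double&gt;.Build.DenseOfArray, Cholesky(), LU(), Solve()); an illustration only, not code from this repository.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationDemo
{
    static void Main()
    {
        // Symmetric positive definite system A x = b.
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4.0, 1.0 },
            { 1.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

        // The Cholesky factor is computed once in the constructor and cached,
        // so solving for several right-hand sides reuses the same factorization.
        var cholesky = a.Cholesky();
        Console.WriteLine(cholesky.Solve(b));
        Console.WriteLine(cholesky.Determinant);

        // General (non-symmetric) matrices follow the same pattern with LU, QR or SVD.
        Console.WriteLine(a.LU().Solve(b));
    }
}
```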
[Deleted lines (continued): the rest of the SVD helper routines (column and vector 2-norms, dot products, column swap and scaling, plane rotations equivalent to the DROTG LAPACK routine) and the double Matrix base-class operations — transpose and conjugate transpose, scalar and matrix addition, subtraction, multiplication and division, multiplication by the transpose of this or of the other matrix, canonical modulus and remainder, pointwise multiply/divide/power/modulus/exp/log, the Moore-Penrose pseudo-inverse, trace, induced L1/infinity and entry-wise Frobenius norms, row and column p-norms, sums and normalization, and the Hermitian test — plus the opening description of the Bi-Conjugate Gradient stabilized (BiCGStab) solver, which unlike plain CG also handles non-symmetric matrices.]
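A few of the element-wise operations and norms listed above, as a hedged C# sketch (method names as I recall them from Math.NET Numerics; newer pointwise functions such as PointwiseExp may not exist in every version, so only widely available calls are used here):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class MatrixOpsDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 1.0, 2.0 },
            { 3.0, 4.0 }
        });

        // Element-by-element (Hadamard) product, as opposed to the matrix product a * a.
        Console.WriteLine(a.PointwiseMultiply(a));

        Console.WriteLine(a.Trace());          // 5 = sum of the diagonal elements
        Console.WriteLine(a.FrobeniusNorm());  // sqrt(1 + 4 + 9 + 16) ≈ 5.477
        Console.WriteLine(a.RowSums());        // (3, 7) = value sum of each row
    }
}
```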
[Deleted lines (continued): the documentation of the iterative solvers and preconditioners — the BiCGStab solver (algorithm taken from Barrett et al., "Templates for the Solution of Linear Systems", netlib.org/templates, chapter 2, section 2.3.8), a composite solver that runs a sequence of sub-solvers (based on Bhowmick, Raghavan, McInnes and Norris, Future Generation Computer Systems 20, 2004), a diagonal preconditioner that uses the inverse of the matrix diagonal, the GPBiCG solver (Fujino, Applied Numerical Mathematics 41, 2002) with its configurable BiCGStab/GPBiCG switching counts, the ILU(0) preconditioner (Saad, "Iterative Methods for Sparse Linear Systems", section 10.3.2), the ILUTP preconditioner with fill level, drop tolerance and pivot tolerance plus its internal pivoting and sorting helpers (Chen, "ILUTP_Mem", LNCS 3046, 2004), a simple MILU(0) preconditioner ported from Yousef Saad's Fortran code, the ML(k)-BiCGStab solver with a configurable number of Krylov starting vectors (Yeung and Chan, SIAM J. Sci. Comput. 21(4), pp. 1263–1290), and the TFQMR solver (Saad, section 7.4.3). All solvers share a Solve(coefficient matrix A, solution vector b, result vector x, iterator, preconditioner) method and a helper that computes the true residual b − A·x; the iterator's stop criteria control when iteration ends. Note that much of the success of these solvers depends on the choice of preconditioner.]
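The solvers summarised above all share the Solve(matrix, input, result, iterator, preconditioner) entry point mentioned in the removed text. The C# sketch below shows that call shape; the concrete class and namespace names (BiCgStab, Iterator, IterationCountStopCriterion, ResidualStopCriterion, DiagonalPreconditioner) are quoted from memory of Math.NET Numerics 3.x and may differ in other versions.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;   // BiCgStab, DiagonalPreconditioner (assumed names)
using MathNet.Numerics.LinearAlgebra.Solvers;          // Iterator and stop criteria (assumed names)

class IterativeSolverDemo
{
    static void Main()
    {
        // Small non-symmetric, diagonally dominant test system A x = b with solution (1, 1, 1).
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 5.0, 1.0, 0.0 },
            { 2.0, 4.0, 1.0 },
            { 0.0, 1.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 6.0, 7.0, 4.0 });
        var x = Vector<double>.Build.Dense(3);   // result vector, filled in by the solver

        // Stop after 1000 iterations or once the residual drops below 1e-10, whichever comes first.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new BiCgStab();
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine(x);   // should be close to (1, 1, 1)
    }
}
```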
[Deleted lines (continued): the sparse storage types — SparseMatrix, backed by 3-array compressed sparse row (CSR) storage and intended for very large matrices where most cells are zero, with its NonZerosCount property, its many builders (from a storage instance, copies of matrices, arrays or enumerables, column/row arrays and vectors, diagonals, init functions, identity), triangle extraction, arithmetic, norms, symmetry test and the +, −, * operators; and SparseVector, the matching sparse vector type (not thread safe) with its own builders.]
- - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Evaluates whether this matrix is symmetric. - - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - A vector with sparse storage, intended for very large vectors where most of the cells are zero. - - The sparse vector is not thread safe. - - - - Gets the number of non zero elements in the vector. - - The number of non zero elements. - - - - Create a new sparse vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new sparse vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new sparse vector as a copy of the given other vector. 
- This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector and initialize each value using the provided value. - - - - - Create a new sparse vector and initialize each value using the provided init function. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled - sparse vector and very inefficient. Would be better to work with a dense vector instead. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. - - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Negates vector and saves result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . 
- - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the modulus of. - The divisor to use, - The result of the calculation - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Creates a double sparse vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a double. - - - A double sparse vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - double version of the class. - - - - - Initializes a new instance of the Vector class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Conjugates vector and save result to - - Target vector - - - - Negates vector and saves result to - - Target vector - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. 
- - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Divides each element of the vector by a scalar and stores the result in the result vector. - - - The scalar to divide with. - - - The vector to store the result of the division. - - - - - Divides a scalar by each element of the vector and stores the result in the result vector. - - The scalar to divide. - The vector to store the result of the division. - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise raise this vector to an exponent and store the result into the result vector. - - The exponent to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Returns the value of the absolute minimum element. - - The value of the absolute minimum element. 
- - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the value of the absolute maximum element. - - The value of the absolute maximum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - - The p value. - - - Scalar ret = ( ∑|At(i)|^p )^(1/p) - - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Normalizes this vector to a unit vector with respect to the p-norm. - - - The p value. - - - This vector normalized to a unit vector with respect to the p-norm. - - - - - A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). - - - - - Number of rows. - - Using this instead of the RowCount property to speed up calculating - a matrix index in the data array. - - - - Number of columns. - - Using this instead of the ColumnCount property to speed up calculating - a matrix index in the data array. - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new dense matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new dense matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to be in column-major order (column by column) and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - - Create a new dense matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable. - The enumerable is assumed to be in column-major order (column by column). - This new matrix will be independent from the enumerable. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. 
- - - - - Create a new dense matrix and initialize each value to the same provided value. - - - - - Create a new dense matrix and initialize each value using the provided init function. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new dense matrix with values sampled from the provided random distribution. - - - - - Gets the matrix's data. - - The matrix's data. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of add - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. 
- - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Evaluates whether this matrix is symmetric. - - - - - A vector using dense storage. - - - - - Number of elements - - - - - Gets the vector's data. - - - - - Create a new dense vector straight from an initialized vector storage instance. 
- The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new dense vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new dense vector directly binding to a raw array. - The array is used directly without copying. - Very efficient, but changes to the array and the vector will affect each other. - - - - - Create a new dense vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given array. - This new vector will be independent from the array. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector and initialize each value using the provided value. - - - - - Create a new dense vector and initialize each value using the provided init function. - - - - - Create a new dense vector with values sampled from the provided random distribution. - - - - - Gets the vector's data. - - The vector's data. - - - - Returns a reference to the internal data structure. - - The DenseVector whose internal data we are - returning. - - A reference to the internal date of the given vector. - - - - - Returns a vector bound directly to a reference of the provided array. - - The array to bind to the DenseVector object. - - A DenseVector whose values are bound to the given array. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - The scalar to add. - The vector to store the result of the addition. - - - - Adds another vector to this vector and stores the result into the result vector. - - The vector to add to this one. - The vector to store the result of the addition. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The vector to store the result of the subtraction. - - - - Subtracts another vector from this vector and stores the result into the result vector. - - The vector to subtract from this one. - The vector to store the result of the subtraction. - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Negates vector and saves result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - The scalar to multiply. 
- The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . - - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the modulus of. - The divisor to use, - The result of the calculation - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise multiply this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply this one by. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Creates a float dense vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. - - - A float dense vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a real dense vector to float-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. 
- - - A string containing a real vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a real dense vector to float-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - A matrix type for diagonal matrices. - - - Diagonal matrices can be non-square matrices but the diagonal always starts - at element 0,0. A diagonal matrix will throw an exception if non diagonal - entries are set. The exception to this is when the off diagonal elements are - 0.0 or NaN; these settings will cause no change to the diagonal matrix. - - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new diagonal matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to contain the diagonal elements only and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new diagonal matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - The matrix to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - The array to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value using the provided init function. 
- - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new diagonal matrix with diagonal values sampled from the provided random distribution. - - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - If the result matrix's dimensions are not the same as this matrix. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the division. - - - - Computes the determinant of this matrix. - - The determinant of this matrix. - - - - Returns the elements of the diagonal in a . - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Copies the values of the given array to the diagonal. - - The array to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - - Copies the values of the given to the diagonal. - - The vector to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - Calculates the condition number of this matrix. 
- The condition number of the matrix. - - - Computes the inverse of this matrix. - If is not a square matrix. - If is singular. - The inverse of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Permute the columns of a matrix according to a permutation. - - The column permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Permute the rows of a matrix according to a permutation. - - The row permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Evaluates whether this matrix is symmetric. - - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - A class which encapsulates the functionality of a Cholesky factorization. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. 
- - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Gets the determinant of the matrix for which the Cholesky matrix was computed. - - - - - Gets the log determinant of the matrix for which the Cholesky matrix was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for dense matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. 
On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. 
- The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. 
- Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. 
- If EVD algorithm failed to converge with matrix . - - - - Symmetric Householder reduction to tridiagonal form. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tred2 by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Complex scalar division X/Y. - - Real part of X - Imaginary part of X - Real part of Y - Imaginary part of Y - Division result as a number. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. 
- If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Double value z1 - Double value z2 - Result multiplication of signum function and absolute value - - - - Swap column and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. 
On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Calculate Norm 2 of the column in matrix starting from row - - Source matrix - The number of rows in - Column index - Start row index - Norm2 (Euclidean norm) of the column - - - - Calculate Norm 2 of the vector starting from index - - Source vector - Start index - Norm2 (Euclidean norm) of the vector - - - - Calculate dot product of and - - Source matrix - The number of rows in - Index of column A - Index of column B - Starting row index - Dot product value - - - - Performs rotation of points in the plane. Given two vectors x and y , - each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) - - Source matrix - The number of rows in - Index of column A - Index of column B - Scalar "c" value - Scalar "s" value - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - float version of the class. - - - - - Initializes a new instance of the Matrix class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Returns the conjugate transpose of this matrix. - - The conjugate transpose of this matrix. - - - - Puts the conjugate transpose of this matrix into the result matrix. - - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. 
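The factorization members above (Cholesky, LU, QR, EVD and SVD) all expose Solve overloads for the matrix equation AX = B and the vector equation Ax = b. A minimal usage sketch, assuming the standard Math.NET Numerics builder API (Matrix<double>.Build, the LU()/QR()/Svd() factorization methods and their Solve/Determinant/Rank members); the float-based classes documented here are used the same way:

```csharp
using MathNet.Numerics.LinearAlgebra;

// A small, well-conditioned system A*x = b.
var a = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 4.0, 1.0, 0.0 },
    { 1.0, 3.0, 1.0 },
    { 0.0, 1.0, 2.0 }
});
var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });

// Each factorization caches its decomposition and can solve repeatedly.
var lu  = a.LU();    // A = P*L*U
var qr  = a.QR();    // A = Q*R (Householder)
var svd = a.Svd();   // A = U*S*V^T

Vector<double> xLu  = lu.Solve(b);
Vector<double> xQr  = qr.Solve(b);
Vector<double> xSvd = svd.Solve(b);

System.Console.WriteLine(xLu);
System.Console.WriteLine($"det(A) = {lu.Determinant}, rank(A) = {svd.Rank}");
```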
- - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to divide by each element of the matrix. - The matrix to store the result of the division. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. 
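The element-wise members above exist both as result-parameter overloads and as variants that return a new matrix; a brief sketch of the latter, assuming the usual PointwiseMultiply/PointwiseDivide/PointwisePower names and the operator overloads:

```csharp
using MathNet.Numerics.LinearAlgebra;

var a = Matrix<double>.Build.Dense(2, 2, (i, j) => i + j + 1.0);
var b = Matrix<double>.Build.Dense(2, 2, 2.0);

var hadamard = a.PointwiseMultiply(b);  // element-wise product
var ratio    = a.PointwiseDivide(b);    // element-wise quotient
var squared  = a.PointwisePower(2.0);   // raise every entry to a power
var scaled   = a * 3.0 + b;             // operator overloads for scalar/matrix arithmetic

System.Console.WriteLine(hadamard);
```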
- - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Calculates the p-norms of all row vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the p-norms of all column vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all row vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all column vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the value sum of each row vector. - - - - - Calculates the absolute value sum of each row vector. - - - - - Calculates the value sum of each column vector. - - - - - Calculates the absolute value sum of each column vector. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' - of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the - BiCGStab can be used on non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of a proper preconditioner.
- The Bi-CGSTAB algorithm was taken from:
- "Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods",
- Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Charles Romine and Henk van der Vorst,
- Url: http://www.netlib.org/templates/Templates.html
- The algorithm is described in Chapter 2, section 2.3.8, page 27.
- The example code below provides an indication of the possible use of the solver.
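The `<example>` markup referenced above was stripped from this dump. As a placeholder, a minimal sketch of driving such a solver, assuming the BiCgStab, DiagonalPreconditioner, Iterator and stop-criterion types from MathNet.Numerics.LinearAlgebra.Double.Solvers and .Solvers (the single-precision classes documented here follow the same Solve signature):

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

// Small diagonally dominant system A*x = b.
var a = Matrix<double>.Build.DenseOfArray(new double[,]
{
    {  4.0, -1.0,  0.0 },
    { -1.0,  4.0, -1.0 },
    {  0.0, -1.0,  4.0 }
});
var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });
var x = Vector<double>.Build.Dense(b.Count);   // result vector, filled by Solve

// Stop after 1000 iterations or once the residual is small enough.
var iterator = new Iterator<double>(
    new IterationCountStopCriterion<double>(1000),
    new ResidualStopCriterion<double>(1e-10));

var solver = new BiCgStab();
// The preconditioner is assumed to be initialized with A by the solver itself.
solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

System.Console.WriteLine(x);
```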
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient , A. - The solution , b. - The result , x. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A composite matrix solver. The actual solver is made by a sequence of - matrix solvers. - - - - Solver based on:
- "Faster PDE-based simulations using robust composite linear solvers",
- S. Bhowmick, P. Raghavan, L. McInnes, B. Norris,
- Future Generation Computer Systems, Vol 20, 2004, pp 373-387
- Note that if an iterator is passed to this solver it will be used for all the sub-solvers.
- - - The collection of solvers that will be used - - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A diagonal preconditioner. The preconditioner uses the inverse - of the matrix diagonal as preconditioning values. - - - - - The inverse of the matrix diagonal. - - - - - Returns the decomposed matrix diagonal. - - The matrix diagonal. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - A Generalized Product Bi-Conjugate Gradient iterative matrix solver. - - - - The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an - alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. - Unlike the CG solver the GPBiCG solver can be used on - non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of a proper preconditioner.
- The GPBiCG algorithm was taken from:
- "GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness",
- S. Fujino,
- Applied Numerical Mathematics, Volume 41, 2002, pp 107-117
- The example code below provides an indication of the possible use of the solver.
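The referenced example is likewise missing; a sketch under the assumption that the solver type is GpBiCg and that the switching thresholds documented above are exposed as NumberOfBiCgStabSteps / NumberOfGpBiCgSteps (identifiers assumed):

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

// Random but strongly diagonally dominant system, so the iteration converges.
var a = Matrix<double>.Build.Random(50, 50) + Matrix<double>.Build.DenseIdentity(50) * 50.0;
var b = Vector<double>.Build.Random(50);
var x = Vector<double>.Build.Dense(50);

var solver = new GpBiCg
{
    NumberOfBiCgStabSteps = 2,   // BiCgStab steps before switching (identifier assumed)
    NumberOfGpBiCgSteps = 8      // GPBiCG steps before switching back (identifier assumed)
};

var iterator = new Iterator<double>(
    new ResidualStopCriterion<double>(1e-8),
    new IterationCountStopCriterion<double>(500));

solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());
```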
- - - Indicates the number of BiCGStab steps should be taken - before switching. - - - - - Indicates the number of GPBiCG steps should be taken - before switching. - - - - - Gets or sets the number of steps taken with the BiCgStab algorithm - before switching over to the GPBiCG algorithm. - - - - - Gets or sets the number of steps taken with the GPBiCG algorithm - before switching over to the BiCgStab algorithm. - - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Decide if to do steps with BiCgStab - - Number of iteration - true if yes, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - An incomplete, level 0, LU factorization preconditioner. - - - The ILU(0) algorithm was taken from:
- "Iterative Methods for Sparse Linear Systems",
- Yousef Saad
- The algorithm is described in Chapter 10, section 10.3.2, page 275.
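A sketch of plugging this preconditioner into an iterative solve; the type name ILU0Preconditioner is assumed, while Initialize(matrix) and Approximate(rhs, lhs) follow the member descriptions given for this class:

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

// Sparse, diagonally dominant test system.
var a = Matrix<double>.Build.SparseIdentity(100) * 4.0;
var b = Vector<double>.Build.Dense(100, 1.0);
var x = Vector<double>.Build.Dense(100);

var ilu0 = new ILU0Preconditioner();   // type name assumed
// The solver is expected to call Initialize(a) on the preconditioner and then
// Approximate(rhs, lhs) on each iteration, as documented for this class.
new BiCgStab().Solve(a, b, x,
    new Iterator<double>(new ResidualStopCriterion<double>(1e-10)), ilu0);
```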
- - - The matrix holding the lower (L) and upper (U) matrices. The - decomposition matrices are combined to reduce storage. - - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - A new matrix containing the lower triagonal elements. - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - This class performs an Incomplete LU factorization with drop tolerance - and partial pivoting. The drop tolerance indicates which additional entries - will be dropped from the factorized LU matrices. - - - The ILUTP-Mem algorithm was taken from:
- "ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner",
- Tzu-Yi Chen, Department of Mathematics and Computer Science,
- Pomona College, Claremont CA 91711, USA
- Published in: Lecture Notes in Computer Science, Volume 3046 / 2004, pp. 20-28
- The algorithm is described in Section 2, page 22.
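The constructor and the three tuning properties described for this preconditioner suggest usage along the following lines; both the type name (ILUTPPreconditioner) and the property identifiers are assumptions read off the descriptions, so treat this as a sketch only:

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

var a = Matrix<double>.Build.SparseIdentity(200) * 5.0;
var b = Vector<double>.Build.Dense(200, 1.0);
var x = Vector<double>.Build.Dense(200);

// Type name and property identifiers are assumed from the descriptions above.
var ilutp = new ILUTPPreconditioner
{
    FillLevel = 10,          // allowed fill as a multiple of the original non-zero count
    DropTolerance = 1e-4,    // entries below this absolute value are dropped
    PivotTolerance = 0.5     // 0.0 would disable pivoting entirely
};

new BiCgStab().Solve(a, b, x,
    new Iterator<double>(new ResidualStopCriterion<double>(1e-10)), ilutp);
```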
- - - The default fill level. - - - - - The default drop tolerance. - - - - - The decomposed upper triangular matrix. - - - - - The decomposed lower triangular matrix. - - - - - The array containing the pivot values. - - - - - The fill level. - - - - - The drop tolerance. - - - - - The pivot tolerance. - - - - - Initializes a new instance of the class with the default settings. - - - - - Initializes a new instance of the class with the specified settings. - - - The amount of fill that is allowed in the matrix. The value is a fraction of - the number of non-zero entries in the original matrix. Values should be positive. - - - The absolute drop tolerance which indicates below what absolute value an entry - will be dropped from the matrix. A drop tolerance of 0.0 means that no values - will be dropped. Values should always be positive. - - - The pivot tolerance which indicates at what level pivoting will take place. A - value of 0.0 means that no pivoting will take place. - - - - - Gets or sets the amount of fill that is allowed in the matrix. The - value is a fraction of the number of non-zero entries in the original - matrix. The standard value is 200. - - - - Values should always be positive and can be higher than 1.0. A value lower - than 1.0 means that the eventual preconditioner matrix will have fewer - non-zero entries as the original matrix. A value higher than 1.0 means that - the eventual preconditioner can have more non-zero values than the original - matrix. - - - Note that any changes to the FillLevel after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the absolute drop tolerance which indicates below what absolute value - an entry will be dropped from the matrix. The standard value is 0.0001. - - - - The values should always be positive and can be larger than 1.0. A low value will - keep more small numbers in the preconditioner matrix. A high value will remove - more small numbers from the preconditioner matrix. - - - Note that any changes to the DropTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the pivot tolerance which indicates at what level pivoting will - take place. The standard value is 0.0 which means pivoting will never take place. - - - - The pivot tolerance is used to calculate if pivoting is necessary. Pivoting - will take place if any of the values in a row is bigger than the - diagonal value of that row divided by the pivot tolerance, i.e. pivoting - will take place if row(i,j) > row(i,i) / PivotTolerance for - any j that is not equal to i. - - - Note that any changes to the PivotTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the lower triagonal elements. 
- - - - Returns the pivot array. This array is not needed for normal use because - the preconditioner will return the solution vector values in the proper order. - - - This method is used for debugging purposes only and should normally not be used. - - The pivot array. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. Note that the - method takes a general matrix type. However internally the data is stored - as a sparse matrix. Therefore it is not recommended to pass a dense matrix. - - If is . - If is not a square matrix. - - - - Pivot elements in the according to internal pivot array - - Row to pivot in - - - - Was pivoting already performed - - Pivots already done - Current item to pivot - true if performed, otherwise false - - - - Swap columns in the - - Source . - First column index to swap - Second column index to swap - - - - Sort vector descending, not changing vector but placing sorted indices to - - Start sort form - Sort till upper bound - Array with sorted vector indices - Source - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - Pivot elements in according to internal pivot array - - Source . - Result after pivoting. - - - - An element sort algorithm for the class. - - - This sort algorithm is used to sort the columns in a sparse matrix based on - the value of the element on the diagonal of the matrix. - - - - - Sorts the elements of the vector in decreasing - fashion. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Sorts the elements of the vector in decreasing - fashion using heap sort algorithm. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Build heap for double indices - - Root position - Length of - Indices of - Target - - - - Sift double indices - - Indices of - Target - Root position - Length of - - - - Sorts the given integers in a decreasing fashion. - - The values. - - - - Sort the given integers in a decreasing fashion using heapsort algorithm - - Array of values to sort - Length of - - - - Build heap - - Target values array - Root position - Length of - - - - Sift values - - Target value array - Root position - Length of - - - - Exchange values in array - - Target values array - First value to exchange - Second value to exchange - - - - A simple milu(0) preconditioner. - - - Original Fortran code by Yousef Saad (07 January 2004) - - - - Use modified or standard ILU(0) - - - - Gets or sets a value indicating whether to use modified or standard ILU(0). - - - - - Gets a value indicating whether the preconditioner is initialized. - - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square or is not an - instance of SparseCompressedRowMatrixStorage. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector b. - The left hand side vector x. - - - - MILU0 is a simple milu(0) preconditioner. - - Order of the matrix. - Matrix values in CSR format (input). - Column indices (input). 
- Row pointers (input). - Matrix values in MSR format (output). - Row pointers and column indices (output). - Pointer to diagonal elements (output). - True if the modified/MILU algorithm should be used (recommended) - Returns 0 on success or k > 0 if a zero pivot was encountered at step k. - - - - A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' - of the standard BiCgStab solver. - - - The algorithm was taken from:
- "ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors",
- Man-Chung Yeung and Tony F. Chan,
- SIAM Journal on Scientific Computing, Volume 21, Number 4, pp. 1263-1290
- The example code below provides an indication of the possible use of the solver.
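Again the example markup is missing; a sketch assuming the type name MlkBiCgStab and that the number of Lanczos starting vectors is settable via a NumberOfStartingVectors property (identifier assumed; the documentation above only describes it):

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

var a = Matrix<double>.Build.Random(64, 64) + Matrix<double>.Build.DenseIdentity(64) * 64.0;
var b = Vector<double>.Build.Random(64);
var x = Vector<double>.Build.Dense(64);

// Must be larger than 1 and smaller than the number of variables.
var solver = new MlkBiCgStab { NumberOfStartingVectors = 4 };

solver.Solve(a, b, x,
    new Iterator<double>(
        new ResidualStopCriterion<double>(1e-8),
        new IterationCountStopCriterion<double>(1000)),
    new DiagonalPreconditioner());
```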
- - - The default number of starting vectors. - - - - - The collection of starting vectors which are used as the basis for the Krylov sub-space. - - - - - The number of starting vectors used by the algorithm - - - - - Gets or sets the number of starting vectors. - - - Must be larger than 1 and smaller than the number of variables in the matrix that - for which this solver will be used. - - - - - Resets the number of starting vectors to the default value. - - - - - Gets or sets a series of orthonormal vectors which will be used as basis for the - Krylov sub-space. - - - - - Gets the number of starting vectors to create - - Maximum number - Number of variables - Number of starting vectors to create - - - - Returns an array of starting vectors. - - The maximum number of starting vectors that should be created. - The number of variables. - - An array with starting vectors. The array will never be larger than the - but it may be smaller if - the is smaller than - the . - - - - - Create random vectors array - - Number of vectors - Size of each vector - Array of random vectors - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Source A. - Residual data. - x data. - b data. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. - - - - The TFQMR algorithm was taken from:
- "Iterative Methods for Sparse Linear Systems",
- Yousef Saad
- The algorithm is described in Chapter 7, section 7.4.3, page 219.
- The example code below provides an indication of the possible use of the solver.
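As with the other solvers, the example is not included in this dump; a minimal sketch assuming the type name TFQMR and the Solve signature documented below:

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

var a = Matrix<double>.Build.SparseIdentity(100) * 3.0;
var b = Vector<double>.Build.Dense(100, 1.0);
var x = Vector<double>.Build.Dense(100);

new TFQMR().Solve(a, b, x,
    new Iterator<double>(new ResidualStopCriterion<double>(1e-10)),
    new DiagonalPreconditioner());
```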
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Is even? - - Number to check - true if even, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. - The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. - Wikipedia - CSR. - - - - - Gets the number of non zero elements in the matrix. - - The number of non zero elements. - - - - Create a new sparse matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new sparse matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable. - The enumerable is assumed to be in row-major order (row by row). - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - - Create a new sparse matrix with the given number of rows and columns as a copy of the given array. - The array is assumed to be in column-major order (column by column). - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix and initialize each value to the same provided value. - - - - - Create a new sparse matrix and initialize each value using the provided init function. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . 
- If the result matrix's dimensions are not the same as this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. 
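To make the CSR-backed sparse matrix concrete, a short sketch; Build.Sparse, the indexer and the matrix-vector product are standard Math.NET calls, and NonZerosCount is the property documented above:

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

// 1000 x 1000 matrix, all zeros; only assigned cells consume storage.
var m = Matrix<double>.Build.Sparse(1000, 1000);
for (var i = 0; i < 1000; i++)
{
    m[i, i] = 2.0;
    if (i > 0)   m[i, i - 1] = -1.0;
    if (i < 999) m[i, i + 1] = -1.0;
}

var v = Vector<double>.Build.Dense(1000, 1.0);
var product = m * v;                                    // sparse matrix-vector product
System.Console.WriteLine(((SparseMatrix)m).NonZerosCount);
```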
- - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Evaluates whether this matrix is symmetric. - - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - A vector with sparse storage, intended for very large vectors where most of the cells are zero. - - The sparse vector is not thread safe. - - - - Gets the number of non zero elements in the vector. - - The number of non zero elements. - - - - Create a new sparse vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new sparse vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new sparse vector as a copy of the given other vector. 
- This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector and initialize each value using the provided value. - - - - - Create a new sparse vector and initialize each value using the provided init function. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled - sparse vector and very inefficient. Would be better to work with a dense vector instead. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. - - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Negates vector and saves result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . 
- - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the modulus of. - The divisor to use, - The result of the calculation - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Creates a float sparse vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. - - - A float sparse vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - float version of the class. - - - - - Initializes a new instance of the Vector class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Conjugates vector and save result to - - Target vector - - - - Negates vector and saves result to - - Target vector - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. 
- - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Divides each element of the vector by a scalar and stores the result in the result vector. - - - The scalar to divide with. - - - The vector to store the result of the division. - - - - - Divides a scalar by each element of the vector and stores the result in the result vector. - - The scalar to divide. - The vector to store the result of the division. - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise raise this vector to an exponent and store the result into the result vector. - - The exponent to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Returns the value of the absolute minimum element. - - The value of the absolute minimum element. 
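A brief sketch of the norm and dot-product members above, using the single-precision builder:

```csharp
using MathNet.Numerics.LinearAlgebra;

var v = Vector<float>.Build.Dense(new float[] { 3f, -4f, 0f });
var w = Vector<float>.Build.Dense(new float[] { 1f, 1f, 1f });

double dot = v.DotProduct(w);     // 3*1 + (-4)*1 + 0*1 = -1
double l1  = v.L1Norm();          // 7 (Manhattan norm)
double l2  = v.L2Norm();          // 5 (Euclidean norm)
double inf = v.InfinityNorm();    // 4 (maximum absolute value)

var unit = v.Normalize(2.0);      // rescale to unit Euclidean length
```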
- - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the value of the absolute maximum element. - - The value of the absolute maximum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - - The p value. - - - Scalar ret = ( ∑|At(i)|^p )^(1/p) - - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Normalizes this vector to a unit vector with respect to the p-norm. - - - The p value. - - - This vector normalized to a unit vector with respect to the p-norm. - - - - - A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). - - - - - Number of rows. - - Using this instead of the RowCount property to speed up calculating - a matrix index in the data array. - - - - Number of columns. - - Using this instead of the ColumnCount property to speed up calculating - a matrix index in the data array. - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new dense matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new dense matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to be in column-major order (column by column) and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - - Create a new dense matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable. - The enumerable is assumed to be in column-major order (column by column). - This new matrix will be independent from the enumerable. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. 
- - - - - Create a new dense matrix and initialize each value to the same provided value. - - - - - Create a new dense matrix and initialize each value using the provided init function. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new dense matrix with values sampled from the provided random distribution. - - - - - Gets the matrix's data. - - The matrix's data. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of add - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. 
- The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Evaluates whether this matrix is symmetric. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A vector using dense storage. - - - - - Number of elements - - - - - Gets the vector's data. - - - - - Create a new dense vector straight from an initialized vector storage instance. - The storage is used directly without copying. 
- Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new dense vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new dense vector directly binding to a raw array. - The array is used directly without copying. - Very efficient, but changes to the array and the vector will affect each other. - - - - - Create a new dense vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given array. - This new vector will be independent from the array. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector and initialize each value using the provided value. - - - - - Create a new dense vector and initialize each value using the provided init function. - - - - - Create a new dense vector with values sampled from the provided random distribution. - - - - - Gets the vector's data. - - The vector's data. - - - - Returns a reference to the internal data structure. - - The DenseVector whose internal data we are - returning. - - A reference to the internal date of the given vector. - - - - - Returns a vector bound directly to a reference of the provided array. - - The array to bind to the DenseVector object. - - A DenseVector whose values are bound to the given array. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - The scalar to add. - The vector to store the result of the addition. - - - - Adds another vector to this vector and stores the result into the result vector. - - The vector to add to this one. - The vector to store the result of the addition. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The vector to store the result of the subtraction. - - - - Subtracts another vector from this vector and stores the result into the result vector. - - The vector to subtract from this one. - The vector to store the result of the subtraction. - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Negates vector and saves result to - - Target vector - - - - Conjugates vector and save result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. 
- - The scalar to multiply. - The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Multiplies a vector with a complex. - - The vector to scale. - The Complex value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a complex. - - The Complex value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . - - - - Divides a vector with a complex. - - The vector to divide. - The Complex value. - The result of the division. - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Creates a Complex dense vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. - - - A Complex dense vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a complex dense vector to double-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a complex dense vector to double-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. 
- - - - - A matrix type for diagonal matrices. - - - Diagonal matrices can be non-square matrices but the diagonal always starts - at element 0,0. A diagonal matrix will throw an exception if non diagonal - entries are set. The exception to this is when the off diagonal elements are - 0.0 or NaN; these settings will cause no change to the diagonal matrix. - - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new diagonal matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to contain the diagonal elements only and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new diagonal matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - The matrix to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - The array to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new diagonal matrix with diagonal values sampled from the provided random distribution. - - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. 
- - The matrix to subtract. - The matrix to store the result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - If the result matrix's dimensions are not the same as this matrix. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the division. - - - - Computes the determinant of this matrix. - - The determinant of this matrix. - - - - Returns the elements of the diagonal in a . - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Copies the values of the given array to the diagonal. - - The array to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - - Copies the values of the given to the diagonal. - - The vector to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the Frobenius norm of this matrix. - The Frobenius norm of this matrix. - - - Calculates the condition number of this matrix. 
- The condition number of the matrix. - - - Computes the inverse of this matrix. - If is not a square matrix. - If is singular. - The inverse of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Permute the columns of a matrix according to a permutation. - - The column permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Permute the rows of a matrix according to a permutation. - - The row permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Evaluates whether this matrix is symmetric. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A class which encapsulates the functionality of a Cholesky factorization. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Gets the determinant of the matrix for which the Cholesky matrix was computed. - - - - - Gets the log determinant of the matrix for which the Cholesky matrix was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for dense matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. 
- - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. 
This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The type of QR factorization to perform. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. 
A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . 
- - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. - - Source matrix to reduce - Output: Arrays for internal storage of real parts of eigenvalues - Output: Arrays for internal storage of imaginary parts of eigenvalues - Output: Arrays that contains further information about the transformations. - Order of initial matrix - This is derived from the Algol procedures HTRIDI by - Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. 
- Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Determines eigenvectors by undoing the symmetric tridiagonalize transformation - - The eigen vectors to work on. - Previously tridiagonalized matrix by . - Contains further information about the transformations - Input matrix order - This is derived from the Algol procedures HTRIBK, by - by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - The eigen values to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. 
- - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Complex value z1 - Complex value z2 - Result multiplication of signum function and absolute value - - - - Interchanges two vectors and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. 
These hunks of the diff contain the XML documentation (IntelliSense comments) bundled with the application for the Complex-valued linear-algebra types of the Math.NET Numerics library. Condensed, the documented API is:

* Internal helpers: Euclidean (2-)norm of a matrix column from a given start row and of a vector section, a dot product of two matrix columns with the first column conjugated, a plane (Givens) rotation of two columns (x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i)), and SVD-based solves of AX = B and Ax = b once A has been factorized.
* Matrix (Complex version of the base class): zeroing of all values whose absolute value is below a threshold; conjugate transpose, conjugation and negation; scalar and matrix addition and subtraction (matrix operands must be non-null and of equal dimensions); scalar, vector and matrix multiplication; division by a scalar and division of a scalar by the matrix; transpose- and conjugate-transpose-multiply variants (on either side, with matrix or vector operands); pointwise multiply, divide, power, canonical modulus (sign of the divisor), remainder (sign of the dividend), exponential and natural logarithm; the Moore-Penrose pseudo-inverse; the trace (square matrices only); the induced L1 norm (maximum absolute column sum), induced infinity norm (maximum absolute row sum) and entry-wise Frobenius norm; row and column p-norms and normalization of rows or columns to unit p-norm (typical p: 1.0, 2.0, positive infinity); row and column value and absolute-value sums; and a Hermitian (conjugate-symmetric) check.
* BiCGStab: a Bi-Conjugate Gradient Stabilized iterative matrix solver. Unlike the standard Conjugate Gradient solver it can be used on non-symmetric matrices; much of its success depends on selecting a proper preconditioner. The algorithm was taken from "Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods" (Barrett, Berry, Chan, Demmel, Donato, Dongarra, Eijkhout, Pozo, Romine, van der Vorst), http://www.netlib.org/templates/Templates.html, Chapter 2, section 2.3.8, page 27. The documentation refers to example code indicating possible use of the solver.
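The example code the documentation refers to is not part of this dump, so here is a minimal sketch of such a solve. It follows the documented Solve(matrix A, solution vector b, result x, iterator, preconditioner) signature; the class and namespace names (`BiCgStab`, `DiagonalPreconditioner`, `Iterator<Complex>`, the stop criteria) are assumed to match Math.NET Numerics 3.x and are not taken from the documented text itself.

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Complex;            // DenseMatrix, DenseVector
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;    // BiCgStab, DiagonalPreconditioner (assumed namespace)
using MathNet.Numerics.LinearAlgebra.Solvers;            // Iterator, stop criteria (assumed namespace)

class BiCgStabSketch
{
    static void Main()
    {
        // Small non-symmetric complex system A x = b.
        Matrix<Complex> a = DenseMatrix.OfArray(new Complex[,]
        {
            { 4, 1, 0 },
            { 2, 5, 1 },
            { 0, 1, 3 }
        });
        Vector<Complex> b = new DenseVector(new Complex[] { 1, 2, 3 });
        Vector<Complex> x = new DenseVector(3);   // result vector, filled by Solve

        // The iterator decides when to stop: at most 1000 iterations or a residual below 1e-10.
        var iterator = new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(1000),
            new ResidualStopCriterion<Complex>(1e-10));

        // Solve(matrix A, solution vector b, result x, iterator, preconditioner) as documented above.
        var solver = new BiCgStab();
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine(x);
        Console.WriteLine("true residual: " + (b - a * x).L2Norm());
    }
}
```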
* BiCGStab methods: a true-residual calculation (residual = b - Ax) and Solve, which solves Ax = b given the coefficient matrix A, the solution vector b, the result vector x, an iterator controlling when to stop iterating, and a preconditioner used for approximations.
* CompositeSolver: a composite matrix solver whose actual solver is a sequence of matrix solvers. Based on "Faster PDE-based simulations using robust composite linear solvers" (S. Bhowmick, P. Raghavan, L. McInnes, B. Norris), Future Generation Computer Systems, Vol 20, 2004, pp. 373–387. If an iterator is passed to this solver, it is used for all the sub-solvers.
* CompositeSolver members: the collection of solvers that will be used, and Solve(A, b, x, iterator, preconditioner).
* Diagonal preconditioner: uses the inverse of the matrix diagonal as preconditioning values and exposes the decomposed diagonal. Initialize requires a non-null, square matrix; Approximate then approximates the solution of Ax = b given the right-hand-side vector and a result vector.
* GPBiCG: a Generalized Product Bi-Conjugate Gradient solver, an alternative to BiCGStab that likewise handles non-symmetric matrices; again, much of its success depends on selecting a proper preconditioner. The algorithm was taken from S. Fujino, "GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness", Applied Numerical Mathematics, Volume 41, 2002, pp. 107–117. The documentation refers to example code indicating possible use of the solver.
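As a rough sketch of how the diagonal preconditioner and the GPBiCG switching properties could be combined, under the assumption that the Math.NET Numerics class names are `GpBiCg` and `DiagonalPreconditioner` and the switching properties are called `NumberOfBiCgStabSteps` and `NumberOfGpBiCgSteps` (the behavior is documented above, the exact names are not):

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Complex;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;   // GpBiCg, DiagonalPreconditioner (assumed names)
using MathNet.Numerics.LinearAlgebra.Solvers;

class GpBiCgSketch
{
    static Vector<Complex> Solve(Matrix<Complex> a, Vector<Complex> b)
    {
        var x = new DenseVector(b.Count);

        // The documented properties control how many steps of each method are
        // taken before switching between the BiCGStab and GPBiCG algorithms.
        var solver = new GpBiCg
        {
            NumberOfBiCgStabSteps = 2,
            NumberOfGpBiCgSteps = 4
        };

        // The diagonal preconditioner uses the inverse of A's diagonal.
        var preconditioner = new DiagonalPreconditioner();
        preconditioner.Initialize(a);

        var iterator = new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(1000),
            new ResidualStopCriterion<Complex>(1e-10));

        solver.Solve(a, b, x, iterator, preconditioner);
        return x;
    }
}
```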
* GPBiCG members: the number of BiCGStab steps and the number of GPBiCG steps to take before switching between the two algorithms (both exposed as get/set properties), a true-residual calculation (residual = b - Ax), a per-iteration decision whether to run BiCGStab steps, and Solve(A, b, x, iterator, preconditioner).
* ILU(0) preconditioner: an incomplete, level-0 LU factorization preconditioner. The algorithm was taken from Yousef Saad, "Iterative Methods for Sparse Linear Systems", Chapter 10, section 10.3.2, page 275.
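A preconditioner can also be exercised on its own through the documented Initialize/Approximate pair. The sketch below assumes the class is named `ILU0Preconditioner`; only the two-method usage pattern comes from the documentation.

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Complex;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;   // ILU0Preconditioner (assumed name)

class Ilu0Sketch
{
    // Applies the preconditioner once: an approximation of A^-1 * b.
    static Vector<Complex> ApplyOnce(Matrix<Complex> a, Vector<Complex> b)
    {
        var preconditioner = new ILU0Preconditioner();
        preconditioner.Initialize(a);            // A must be square and non-null (see docs above)

        var x = new DenseVector(b.Count);
        preconditioner.Approximate(b, x);        // Approximate(rhs, lhs) as documented
        return x;
    }
}
```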
* ILU(0) members: the L and U factors are combined into a single matrix to reduce storage, with accessors returning the lower and upper triangular factors; Initialize requires a non-null, square matrix; Approximate approximates the solution of Ax = b into a result vector.
* ILUTP preconditioner: an incomplete LU factorization with drop tolerance and partial pivoting; the drop tolerance decides which additional entries are dropped from the factorized L and U matrices. The ILUTP-Mem algorithm was taken from Tzu-Yi Chen, "ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner", Lecture Notes in Computer Science, Volume 3046/2004, pp. 20–28 (described in Section 2, page 22). It stores the decomposed upper and lower triangular matrices and a pivot array, and it can be created with default settings or with explicit values for the following properties; all of them must be non-negative, and changing any of them afterwards invalidates the created preconditioner and requires re-initialization:
  * FillLevel (default 200): the allowed fill as a fraction of the number of non-zero entries of the original matrix; values above 1.0 allow the preconditioner to hold more non-zeros than the original, values below 1.0 fewer.
  * DropTolerance (default 0.0001): the absolute value below which entries are dropped; a low value keeps more small numbers, a high value removes more of them.
  * PivotTolerance (default 0.0, meaning pivoting never takes place): pivoting occurs when row(i,j) > row(i,i) / PivotTolerance for any j not equal to i.
  Debug-only accessors return the upper and lower triangular factors created during the decomposition.
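A configuration sketch for the ILUTP preconditioner, assuming the class is named `ILUTPPreconditioner` and exposes the three documented settings as get/set properties (the property names FillLevel, DropTolerance and PivotTolerance appear in the documentation above; the class name does not):

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Complex;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;   // ILUTPPreconditioner (assumed name)

class IlutpSketch
{
    static void Configure(Matrix<Complex> a)
    {
        var preconditioner = new ILUTPPreconditioner
        {
            FillLevel = 10.0,        // allowed fill, as a fraction of A's non-zero count (default 200)
            DropTolerance = 1e-4,    // entries with a smaller absolute value are dropped
            PivotTolerance = 0.0     // 0.0 = no pivoting (the documented default)
        };

        // Changing any of the three settings afterwards would invalidate the
        // preconditioner and require calling Initialize again.
        preconditioner.Initialize(a);
    }
}
```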
* ILUTP internals: a pivot-array accessor (debugging only; the preconditioner already returns solution values in the proper order), Initialize (accepts a general matrix type but stores the data internally as a sparse matrix, so passing a dense matrix is not recommended; the matrix must be square and non-null), row pivoting and column swapping helpers, a descending index sort that leaves the source vector untouched, Approximate, and reverse pivoting of the result vector. A companion element-sort helper orders the columns of a sparse matrix by the value of the element on the matrix diagonal, using heap sort on index arrays.
* MILU(0) preconditioner: a simple modified ILU(0). A flag selects modified versus standard ILU(0), and the preconditioner reports whether it has been initialized. Initialize requires a square matrix backed by SparseCompressedRowMatrixStorage. The underlying MILU0 routine takes the matrix order plus the CSR values, column indices and row pointers, produces the values and combined row-pointer/column-index arrays in MSR format together with a pointer to the diagonal elements, and returns 0 on success or k > 0 if a zero pivot was encountered at step k (the modified/MILU variant is recommended).
* ML(k)-BiCGStab: a Multiple-Lanczos Bi-Conjugate Gradient Stabilized iterative solver, an improvement of the standard BiCGStab solver. The algorithm was taken from Man-Chung Yeung and Tony F. Chan, "ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors", SIAM Journal on Scientific Computing, Volume 21, Number 4, pp. 1263–1290. The documentation refers to example code indicating possible use of the solver.
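A short sketch of the ML(k)-BiCGStab solver in the same Solve pattern as above. The class name `MlkBiCgStab` and the property name `NumberOfStartingVectors` are assumed; the documentation only states that the number of Lanczos starting vectors is configurable and must lie between 1 and the number of variables.

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Complex;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;   // MlkBiCgStab, DiagonalPreconditioner (assumed names)
using MathNet.Numerics.LinearAlgebra.Solvers;

class MlkBiCgStabSketch
{
    static void Solve(Matrix<Complex> a, Vector<Complex> b, Vector<Complex> x)
    {
        // k starting vectors form the basis of the Krylov sub-space (the "k" in ML(k)-BiCGStab).
        var solver = new MlkBiCgStab { NumberOfStartingVectors = 4 };

        var iterator = new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(1000),
            new ResidualStopCriterion<Complex>(1e-10));

        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());
    }
}
```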
* ML(k)-BiCGStab members: a default number of starting vectors; the collection of starting vectors used as the basis for the Krylov sub-space; a get/set number-of-starting-vectors property (must be larger than 1 and smaller than the number of variables in the matrix); a reset to the default; an optional series of user-supplied orthonormal starting vectors; helpers that determine how many starting vectors to create and that create arrays of random vectors; a true-residual calculation (residual = b - Ax); and Solve(A, b, x, iterator, preconditioner).
* TFQMR: a Transpose-Free Quasi-Minimal Residual iterative matrix solver. The algorithm was taken from Yousef Saad, "Iterative Methods for Sparse Linear Systems", Chapter 7, section 7.4.3, page 219. The documentation refers to example code indicating possible use of the solver.
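The TFQMR solver follows the same Solve signature, so the sketch below focuses on the iterator side, where several stop criteria can be combined. The criterion class names (`IterationCountStopCriterion`, `ResidualStopCriterion`, `DivergenceStopCriterion`) and the `TFQMR`/`ILU0Preconditioner` names are assumptions about Math.NET Numerics, not part of the documented text.

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Complex;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;   // TFQMR, ILU0Preconditioner (assumed names)
using MathNet.Numerics.LinearAlgebra.Solvers;

class TfqmrSketch
{
    static Vector<Complex> Solve(Matrix<Complex> a, Vector<Complex> b)
    {
        var x = new DenseVector(b.Count);

        // The iterator aggregates stop criteria; iteration stops as soon as one of them fires.
        var iterator = new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(500),
            new ResidualStopCriterion<Complex>(1e-12),
            new DivergenceStopCriterion<Complex>());

        var preconditioner = new ILU0Preconditioner();
        preconditioner.Initialize(a);

        new TFQMR().Solve(a, b, x, iterator, preconditioner);
        return x;
    }
}
```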
* TFQMR members: a true-residual calculation (residual = b - Ax), an is-even helper, and Solve(A, b, x, iterator, preconditioner).
* SparseMatrix: a matrix with sparse storage, intended for very large matrices where most cells are zero; the underlying scheme is the 3-array compressed-sparse-row (CSR) format (see Wikipedia - CSR). It reports the number of non-zero elements and provides: a constructor binding directly to an initialized storage instance without copying (for advanced performance or interop scenarios); square and rectangular constructors initialized to zero (zero-length matrices are not supported); independent copy factories from another matrix, a two-dimensional array, an indexed enumerable (each key at most once, omitted keys are zero), a row-major enumerable, a column-major array, enumerables, arrays or vectors of columns and of rows, and a diagonal given as a vector or an array; factories that fill every value or every diagonal value from a constant or an init function; and a square identity factory. Documented operations include lower, upper, strictly lower and strictly upper triangle extraction (the in-place variants require a non-null result of matching dimensions), negation, the induced infinity and entry-wise Frobenius norms, matrix addition and subtraction (same dimensions required), scalar, matrix and vector multiplication, transpose-multiply variants, and pointwise multiplication.
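A small sketch of sparse-matrix creation and inspection. `SparseMatrix.OfArray`, the indexer and `NonZerosCount` are assumed to be the Math.NET Numerics names for the copy factory, element access and non-zero count described above.

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.LinearAlgebra.Complex;   // SparseMatrix

class SparseMatrixSketch
{
    static void Main()
    {
        // Copy factory from a 2-D array (one of the documented "copy of..." factories).
        var m = SparseMatrix.OfArray(new Complex[,]
        {
            { 1, 0, 0 },
            { 0, 2, 0 },
            { 0, 0, 3 }
        });

        Console.WriteLine(m.NonZerosCount);       // 3 stored values in CSR form
        Console.WriteLine(m.FrobeniusNorm());     // sqrt(1^2 + 2^2 + 3^2)
        Console.WriteLine(m.LowerTriangle());     // lower-triangle extraction, as documented

        // An all-zero 1000x1000 matrix costs almost nothing to create.
        var big = new SparseMatrix(1000, 1000);
        big[10, 20] = new Complex(0.5, -0.5);
        Console.WriteLine(big.NonZerosCount);     // 1
    }
}
```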
* SparseMatrix, continued: pointwise division, symmetry and Hermitian checks, and the +, unary +, -, unary -, and * operators for matrix/matrix, matrix/scalar, matrix/vector and vector/matrix combinations. The binary operators allocate new memory, choose the representation of the denser operand, and throw if an operand is null or the dimensions do not conform.
* SparseVector: a vector with sparse storage, intended for very large vectors where most cells are zero; it is not thread safe. It reports the number of non-zero elements and provides storage-binding, zero-initialized (zero-length unsupported), copy, enumerable, indexed-enumerable, constant-value and init-function factories. Adding a non-zero scalar produces a 100% filled sparse vector, which is very inefficient; a dense vector is the better choice there. Documented operations: vector addition and subtraction, negation, conjugation, scalar multiplication, the dot product (the sum of a[i]*b[i]) and the conjugated dot product (the sum of conj(a[i])*b[i]), the corresponding operators, modulus by a scalar, indices of the absolute minimum and maximum elements, the element sum, the L1 and infinity norms, the p-norm, pointwise multiplication, and Parse/TryParse of strings such as 'n', 'n;n;..', '(n;n;..)' or '[n;n;...]' with optional culture-specific formatting information.
* Vector (Complex version of the base class): zeroing of values below an absolute threshold, conjugation, negation, scalar and vector addition and subtraction, scalar multiplication and division, division of a scalar by the vector, pointwise multiply, divide and power (scalar or vector exponent), pointwise canonical modulus (sign of the divisor) and remainder (sign of the dividend), pointwise exponential and natural logarithm, dot and conjugated dot products, scalar modulus variants, the values and indices of the absolute minimum and maximum elements, the element sum, and the L1, L2, infinity and p-norms.
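A sketch of the documented vector operations, in particular the difference between the plain and the conjugated dot product for complex vectors. The method names are assumed to follow the usual Math.NET Numerics naming (`DotProduct`, `ConjugateDotProduct`, `L1Norm`, `L2Norm`, `InfinityNorm`, `Norm`).

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.LinearAlgebra.Complex;   // DenseVector

class VectorOpsSketch
{
    static void Main()
    {
        var u = new DenseVector(new[] { new Complex(1, 1), new Complex(0, 2) });
        var v = new DenseVector(new[] { new Complex(2, 0), new Complex(1, -1) });

        // Plain dot product: sum of u[i]*v[i].
        Console.WriteLine(u.DotProduct(v));

        // Conjugated dot product: sum of conj(u[i])*v[i] -- the complex inner product.
        Console.WriteLine(u.ConjugateDotProduct(v));

        // The documented norms.
        Console.WriteLine(u.L1Norm());        // sum of absolute values
        Console.WriteLine(u.L2Norm());        // Euclidean norm
        Console.WriteLine(u.InfinityNorm());  // maximum absolute value
        Console.WriteLine(u.Norm(3.0));       // general p-norm
    }
}
```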
* Vector, continued: the p-norm is ( ∑|At(i)|^p )^(1/p); indices of the maximum and minimum elements; and normalization of the vector to a unit vector with respect to the p-norm.
* DenseMatrix: a matrix with dense storage, held as a one-dimensional array in column-major order. The row and column counts are cached internally to speed up index calculations, and the raw data array is exposed. Factories mirror the sparse ones: binding to an initialized storage instance without copying, square and rectangular zero-initialized constructors (zero-length unsupported), direct binding to a raw column-major array (very efficient, but changes to the array and the matrix then affect each other), and independent copies from another matrix, a two-dimensional array, an indexed enumerable, a column-major enumerable, enumerables, arrays or vectors of columns and of rows, and a diagonal vector or array, plus constant and init-function fills, diagonal fills, a square identity factory, and values sampled from a provided random distribution. Documented operations: the induced L1 and infinity norms and the entry-wise Frobenius norm, negation, conjugation, scalar and matrix addition and subtraction, scalar, vector and matrix multiplication, transpose and conjugate-transpose multiply variants, scalar division, and pointwise multiply, divide and power.
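A sketch of dense-matrix creation and a few of the operations listed above (norms, conjugate transpose, Hermitian check, pointwise multiplication). Method names are assumed to match the usual Math.NET Numerics API.

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.LinearAlgebra.Complex;   // DenseMatrix

class DenseMatrixSketch
{
    static void Main()
    {
        var a = DenseMatrix.OfArray(new Complex[,]
        {
            { new Complex(1, 2), new Complex(0, -1) },
            { new Complex(3, 0), new Complex(2, 2) }
        });

        Console.WriteLine(a.L1Norm());             // maximum absolute column sum
        Console.WriteLine(a.InfinityNorm());       // maximum absolute row sum
        Console.WriteLine(a.FrobeniusNorm());      // entry-wise Frobenius norm

        var ah = a.ConjugateTranspose();           // Hermitian transpose
        Console.WriteLine((ah * a).IsHermitian()); // A^H * A is always Hermitian

        Console.WriteLine(a.PointwiseMultiply(a)); // element-by-element product
    }
}
```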
* DenseMatrix, continued: the trace (square matrices only); the +, -, and * operators for matrix/matrix, matrix/scalar, matrix/vector and vector/matrix combinations (allocating new memory, choosing the denser operand's representation, throwing on null operands or non-conforming dimensions); and symmetry and Hermitian checks.
* DenseVector: a vector using dense storage. It exposes its length and raw data array and provides storage-binding, zero-initialized (zero-length unsupported), raw-array-binding (the array and the vector then affect each other), copy, array, enumerable, indexed-enumerable, constant, init-function and random-distribution factories. Conversions return a reference to the internal data array or bind a vector directly to a provided array. Documented operations: scalar and vector addition and subtraction with the corresponding operators, negation, conjugation, scalar multiplication, the dot and conjugated dot products, and the scalar multiplication and division operators.
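The raw-array binding mentioned above is worth illustrating, since it behaves differently from the copying factories. A minimal sketch, assuming the binding constructor is `new DenseVector(Complex[])` and the copying factory is `DenseVector.OfEnumerable` as in Math.NET Numerics:

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.LinearAlgebra.Complex;   // DenseVector

class DenseVectorBindingSketch
{
    static void Main()
    {
        var raw = new[] { new Complex(1, 0), new Complex(2, 0), new Complex(3, 0) };

        // Binds directly to the array: no copy, so later changes are visible both ways.
        var bound = new DenseVector(raw);
        raw[0] = new Complex(9, 9);
        Console.WriteLine(bound[0]);            // (9, 9)

        // Copying factory: independent storage, later changes to the array are not reflected.
        var copy = DenseVector.OfEnumerable(raw);
        raw[1] = Complex.Zero;
        Console.WriteLine(copy[1]);             // still (2, 0)
    }
}
```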
- - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Creates a Complex32 dense vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. - - - A Complex32 dense vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a complex dense vector to double-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a complex dense vector to double-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - A matrix type for diagonal matrices. - - - Diagonal matrices can be non-square matrices but the diagonal always starts - at element 0,0. A diagonal matrix will throw an exception if non diagonal - entries are set. The exception to this is when the off diagonal elements are - 0.0 or NaN; these settings will cause no change to the diagonal matrix. - - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new diagonal matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. 
- All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to contain the diagonal elements only and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new diagonal matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - The matrix to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - The array to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new diagonal matrix with diagonal values sampled from the provided random distribution. - - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. 
- - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the division. - - - - Computes the determinant of this matrix. - - The determinant of this matrix. - - - - Returns the elements of the diagonal in a . - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Copies the values of the given array to the diagonal. - - The array to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - - Copies the values of the given to the diagonal. - - The vector to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - Calculates the condition number of this matrix. - The condition number of the matrix. - - - Computes the inverse of this matrix. - If is not a square matrix. - If is singular. - The inverse of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. 
- - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Permute the columns of a matrix according to a permutation. - - The column permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Permute the rows of a matrix according to a permutation. - - The row permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Evaluates whether this matrix is symmetric. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A class which encapsulates the functionality of a Cholesky factorization. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Gets the determinant of the matrix for which the Cholesky matrix was computed. - - - - - Gets the log determinant of the matrix for which the Cholesky matrix was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for dense matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. 
- If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. 
Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. 
- - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. 
- - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. - - Source matrix to reduce - Output: Arrays for internal storage of real parts of eigenvalues - Output: Arrays for internal storage of imaginary parts of eigenvalues - Output: Arrays that contains further information about the transformations. - Order of initial matrix - This is derived from the Algol procedures HTRIDI by - Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Determines eigenvectors by undoing the symmetric tridiagonalize transformation - - The eigen vectors to work on. - Previously tridiagonalized matrix by . - Contains further information about the transformations - Input matrix order - This is derived from the Algol procedures HTRIBK, by - by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. 
Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - The eigen values to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. 
- - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Complex32 value z1 - Complex32 value z2 - Result multiplication of signum function and absolute value - - - - Interchanges two vectors and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Calculate Norm 2 of the column in matrix starting from row - - Source matrix - The number of rows in - Column index - Start row index - Norm2 (Euclidean norm) of the column - - - - Calculate Norm 2 of the vector starting from index - - Source vector - Start index - Norm2 (Euclidean norm) of the vector - - - - Calculate dot product of and conjugating the first vector. - - Source matrix - The number of rows in - Index of column A - Index of column B - Starting row index - Dot product value - - - - Performs rotation of points in the plane. Given two vectors x and y , - each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) - - Source matrix - The number of rows in - Index of column A - Index of column B - scalar cos value - scalar sin value - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Complex32 version of the class. 
- - - - - Initializes a new instance of the Matrix class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Returns the conjugate transpose of this matrix. - - The conjugate transpose of this matrix. - - - - Puts the conjugate transpose of this matrix into the result matrix. - - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to divide by each element of the matrix. - The matrix to store the result of the division. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. 
- - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Calculates the p-norms of all row vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the p-norms of all column vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all row vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all column vectors to a unit p-norm. 
- Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the value sum of each row vector. - - - - - Calculates the absolute value sum of each row vector. - - - - - Calculates the value sum of each column vector. - - - - - Calculates the absolute value sum of each column vector. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' - of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the - BiCGStab can be used on non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The Bi-CGSTAB algorithm was taken from:
- Templates for the solution of linear systems: Building blocks - for iterative methods -
- Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, - June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, - Charles Romine and Henk van der Vorst -
- Url: http://www.netlib.org/templates/Templates.html -
- Algorithm is described in Chapter 2, section 2.3.8, page 27 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient , A. - The solution , b. - The result , x. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A composite matrix solver. The actual solver is made by a sequence of - matrix solvers. - - - - Solver based on:
- Faster PDE-based simulations using robust composite linear solvers
- S. Bhowmicka, P. Raghavan a,*, L. McInnes b, B. Norris
- Future Generation Computer Systems, Vol 20, 2004, pp 373�387
-
- - Note that if an iterator is passed to this solver it will be used for all the sub-solvers. - -
-
- - - The collection of solvers that will be used - - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A diagonal preconditioner. The preconditioner uses the inverse - of the matrix diagonal as preconditioning values. - - - - - The inverse of the matrix diagonal. - - - - - Returns the decomposed matrix diagonal. - - The matrix diagonal. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - A Generalized Product Bi-Conjugate Gradient iterative matrix solver. - - - - The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an - alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. - Unlike the CG solver the GPBiCG solver can be used on - non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The GPBiCG algorithm was taken from:
- GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with - efficiency and robustness -
- S. Fujino -
- Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 -
-
- - The example code below provides an indication of the possible use of the - solver. - -
-
- - - Indicates the number of BiCGStab steps should be taken - before switching. - - - - - Indicates the number of GPBiCG steps should be taken - before switching. - - - - - Gets or sets the number of steps taken with the BiCgStab algorithm - before switching over to the GPBiCG algorithm. - - - - - Gets or sets the number of steps taken with the GPBiCG algorithm - before switching over to the BiCgStab algorithm. - - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Decide if to do steps with BiCgStab - - Number of iteration - true if yes, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - An incomplete, level 0, LU factorization preconditioner. - - - The ILU(0) algorithm was taken from:
- Iterative methods for sparse linear systems
- Yousef Saad
- Algorithm is described in Chapter 10, section 10.3.2, page 275
-
-
- - - The matrix holding the lower (L) and upper (U) matrices. The - decomposition matrices are combined to reduce storage. - - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - A new matrix containing the lower triagonal elements. - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - This class performs an Incomplete LU factorization with drop tolerance - and partial pivoting. The drop tolerance indicates which additional entries - will be dropped from the factorized LU matrices. - - - The ILUTP-Mem algorithm was taken from:
- ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner -
- Tzu-Yi Chen, Department of Mathematics and Computer Science,
- Pomona College, Claremont CA 91711, USA
- Published in:
- Lecture Notes in Computer Science
- Volume 3046 / 2004
- pp. 20 - 28
- Algorithm is described in Section 2, page 22 -
-
- - - The default fill level. - - - - - The default drop tolerance. - - - - - The decomposed upper triangular matrix. - - - - - The decomposed lower triangular matrix. - - - - - The array containing the pivot values. - - - - - The fill level. - - - - - The drop tolerance. - - - - - The pivot tolerance. - - - - - Initializes a new instance of the class with the default settings. - - - - - Initializes a new instance of the class with the specified settings. - - - The amount of fill that is allowed in the matrix. The value is a fraction of - the number of non-zero entries in the original matrix. Values should be positive. - - - The absolute drop tolerance which indicates below what absolute value an entry - will be dropped from the matrix. A drop tolerance of 0.0 means that no values - will be dropped. Values should always be positive. - - - The pivot tolerance which indicates at what level pivoting will take place. A - value of 0.0 means that no pivoting will take place. - - - - - Gets or sets the amount of fill that is allowed in the matrix. The - value is a fraction of the number of non-zero entries in the original - matrix. The standard value is 200. - - - - Values should always be positive and can be higher than 1.0. A value lower - than 1.0 means that the eventual preconditioner matrix will have fewer - non-zero entries as the original matrix. A value higher than 1.0 means that - the eventual preconditioner can have more non-zero values than the original - matrix. - - - Note that any changes to the FillLevel after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the absolute drop tolerance which indicates below what absolute value - an entry will be dropped from the matrix. The standard value is 0.0001. - - - - The values should always be positive and can be larger than 1.0. A low value will - keep more small numbers in the preconditioner matrix. A high value will remove - more small numbers from the preconditioner matrix. - - - Note that any changes to the DropTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the pivot tolerance which indicates at what level pivoting will - take place. The standard value is 0.0 which means pivoting will never take place. - - - - The pivot tolerance is used to calculate if pivoting is necessary. Pivoting - will take place if any of the values in a row is bigger than the - diagonal value of that row divided by the pivot tolerance, i.e. pivoting - will take place if row(i,j) > row(i,i) / PivotTolerance for - any j that is not equal to i. - - - Note that any changes to the PivotTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the lower triagonal elements. 
[Removed in this hunk: auto-generated XML documentation comments from the Math.NET Numerics library — entries for the ILU(0)/MILU(0) preconditioners, the ML(k)-BiCGStab and TFQMR iterative solvers, the sparse CSR matrix and sparse vector types, the generic matrix/vector builders, and the Cholesky and eigenvalue factorizations.]
- If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - Supported data types are double, single, , and . - - - - Gets or sets a value indicating whether matrix is symmetric or not - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - Gets or sets the eigen values (λ) of matrix in ascending value. - - - - - Gets or sets eigenvectors. - - - - - Gets or sets the block diagonal eigenvalue matrix. - - - - - Solves a system of linear equations, AX = B, with A EVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, AX = B, with A EVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - Supported data types are double, single, , and . - - - - Classes that solves a system of linear equations, AX = B. - - Supported data types are double, single, , and . - - - - Solves a system of linear equations, AX = B. - - The right hand side Matrix, B. - The left hand side Matrix, X. - - - - Solves a system of linear equations, AX = B. - - The right hand side Matrix, B. - The left hand side Matrix, X. - - - - Solves a system of linear equations, Ax = b - - The right hand side vector, b. - The left hand side Vector, x. - - - - Solves a system of linear equations, Ax = b. - - The right hand side vector, b. - The left hand side Matrix>, x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - Supported data types are double, single, , and . - - - - Gets the lower triangular factor. - - - - - Gets the upper triangular factor. - - - - - Gets the permutation applied to LU factorization. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. 
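The Solve overloads above are the main entry point of these factorization classes. A minimal C# sketch of typical usage against the Math.NET Numerics API (the `Matrix<double>.Build` factory calls and the names `A`, `b`, `x` are illustrative assumptions, not taken from this documentation):

```csharp
using MathNet.Numerics.LinearAlgebra;

// Build a small symmetric positive definite system A*x = b and solve it.
var A = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 4.0, 2.0, 1.0 },
    { 2.0, 5.0, 3.0 },
    { 1.0, 3.0, 6.0 }
});
var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });

var lu = A.LU();          // P*A = L*U with the pivot permutation stored alongside
var x  = lu.Solve(b);     // reuses the factors; no refactorization per right-hand side

var chol = A.Cholesky();  // A = L*L', only valid for symmetric positive definite A
var x2   = chol.Solve(b);
```

Computing the factorization once and calling Solve per right-hand side is the intended pattern; it avoids refactorizing A for every b.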
- - - - Solves a system of linear equations, AX = B, with A LU factorized. - The right hand side matrix, B. - The left hand side matrix, X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - The right hand side vector, b. - The left hand side vector, x. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - The right hand side vector, b. - The left hand side vector, x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - The inverse of this matrix. - - - - The type of QR factorization to perform. - - - - - Compute the full QR factorization of a matrix. - - - - - Compute the thin QR factorization of a matrix. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthonormal unit vectors, meaning QᵀQ = I) and R is an upper triangular matrix - (also called a right triangular matrix). - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a full factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a thin factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - Supported data types are double, single, Complex, and Complex32. - - - - Gets or sets the orthogonal Q matrix. - - - - - Gets the upper triangular factor R. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - The right hand side matrix, B. - The left hand side matrix, X. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - The right hand side matrix, B. - The left hand side matrix, X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - The right hand side vector, b. - The left hand side vector, x. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - The right hand side vector, b. - The left hand side vector, x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVᵀ where: - - U is an m-by-m unitary matrix; - - Σ is an m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - Vᵀ denotes the transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - Supported data types are double, single, Complex, and Complex32. - - - Indicating whether the U and VT matrices have been computed during the SVD factorization. - - - - Gets the singular values (Σ) of the matrix in descending order. - - - - - Gets the left singular vectors (U - m-by-m unitary matrix). - - - - - Gets the transposed right singular vectors (transpose of V, an n-by-n unitary matrix). - - - - - Returns the singular values as a diagonal matrix. - The singular values as a diagonal matrix.
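For the QR and SVD classes documented here, a hedged sketch of typical usage (again assuming the usual Math.NET Numerics builder API; the `QRMethod` enum is assumed to live in the `Factorization` namespace):

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Factorization;

var M = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 1.0, 2.0 },
    { 3.0, 4.0 },
    { 5.0, 6.0 }
});

// Thin QR of a 3x2 matrix: Q is 3x2, R is 2x2 (the full variant would give a 3x3 Q).
var qr = M.QR(QRMethod.Thin);
var q  = qr.Q;
var r  = qr.R;

// SVD: singular values plus the derived rank and condition number.
var svd   = M.Svd();
var sigma = svd.S;               // vector of singular values
var w     = svd.W;               // the same values as a diagonal matrix
var rank  = svd.Rank;
var cond  = svd.ConditionNumber;
```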
- - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Defines the base class for Matrix classes. - - - Defines the base class for Matrix classes. - - Supported data types are double, single, , and . - - Defines the base class for Matrix classes. - - - Defines the base class for Matrix classes. - - - - - The value of 1.0. - - - - - The value of 0.0. - - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result matrix. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts each element of the matrix from a scalar and stores the result in the result matrix. - - The scalar to subtract from. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. 
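The methods above that take an explicit result argument let callers reuse preallocated storage instead of allocating a fresh matrix or vector per operation. A rough sketch, assuming the public `Add`/`Multiply`/`TransposeThisAndMultiply` overloads with a result parameter behave as documented here (sizes and names are made up):

```csharp
using MathNet.Numerics.LinearAlgebra;

var A = Matrix<double>.Build.Random(100, 100);
var B = Matrix<double>.Build.Random(100, 100);
var v = Vector<double>.Build.Random(100);

// Allocate work buffers once...
var sum = Matrix<double>.Build.Dense(100, 100);
var Av  = Vector<double>.Build.Dense(100);

// ...and let the result-parameter overloads write into them,
// avoiding a new allocation on every call.
A.Add(B, sum);                       // sum = A + B
A.Multiply(v, Av);                   // Av = A * v
A.TransposeThisAndMultiply(B, sum);  // sum = A' * B, reusing the same buffer
```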
- - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar denominator to use. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar numerator to use. - The matrix to store the result of the division. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent matrix and store the result into the result matrix. - - The exponent matrix to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Adds a scalar to each element of the matrix. - - The scalar to add. - The result of the addition. - If the two matrices don't have the same dimensions. - - - - Adds a scalar to each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the addition. 
- If the two matrices don't have the same dimensions. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The result of the addition. - If the two matrices don't have the same dimensions. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix. - - The scalar to subtract. - A new matrix containing the subtraction of this matrix and the scalar. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result matrix. - - The scalar to subtract. - The matrix to store the result of the subtraction. - If this matrix and are not the same size. - - - - Subtracts each element of the matrix from a scalar. - - The scalar to subtract from. - A new matrix containing the subtraction of the scalar and this matrix. - - - - Subtracts each element of the matrix from a scalar and stores the result in the result matrix. - - The scalar to subtract from. - The matrix to store the result of the subtraction. - If this matrix and are not the same size. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Multiplies each element of this matrix with a scalar. - - The scalar to multiply with. - The result of the multiplication. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - If the result matrix's dimensions are not the same as this matrix. - - - - Divides each element of this matrix with a scalar. - - The scalar to divide with. - The result of the division. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - If the result matrix's dimensions are not the same as this matrix. - - - - Divides a scalar by each element of the matrix. - - The scalar to divide. - The result of the division. - - - - Divides a scalar by each element of the matrix and places results into the result matrix. - - The scalar to divide. - The matrix to store the result of the division. - If the result matrix's dimensions are not the same as this matrix. - - - - Multiplies this matrix by a vector and returns the result. - - The vector to multiply with. - The result of the multiplication. - If this.ColumnCount != rightSide.Count. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - If result.Count != this.RowCount. - If this.ColumnCount != .Count. - - - - Left multiply a matrix with a vector ( = vector * matrix ). - - The vector to multiply with. - The result of the multiplication. - If this.RowCount != .Count. - - - - Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. - - The vector to multiply with. - The result of the multiplication. - If result.Count != this.ColumnCount. - If this.RowCount != .Count. 
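The allocating counterparts of these scalar and vector operations are also exposed as plain methods and operators. A small sketch (variable names are placeholders) contrasting matrix*vector with the left multiply vector*matrix:

```csharp
using MathNet.Numerics.LinearAlgebra;

var A = Matrix<double>.Build.DenseOfArray(new double[,] { { 1.0, 2.0 }, { 3.0, 4.0 } });
var v = Vector<double>.Build.Dense(new[] { 1.0, 1.0 });

var shifted = A.Add(10.0);     // add a scalar to every element
var halved  = A.Divide(2.0);   // divide every element by a scalar
var scaled  = 2.5 * A;         // operator form of scalar multiplication
var Av      = A * v;           // matrix * vector, length == RowCount
var vA      = v * A;           // vector * matrix (left multiply), length == ColumnCount
```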
- - - - Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - If this.Columns != other.Rows. - If the result matrix's dimensions are not the this.Rows x other.Columns. - - - - Multiplies this matrix with another matrix and returns the result. - - The matrix to multiply with. - If this.Columns != other.Rows. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - If this.Columns != other.ColumnCount. - If the result matrix's dimensions are not the this.RowCount x other.RowCount. - - - - Multiplies this matrix with transpose of another matrix and returns the result. - - The matrix to multiply with. - If this.Columns != other.ColumnCount. - The result of the multiplication. - - - - Multiplies the transpose of this matrix by a vector and returns the result. - - The vector to multiply with. - The result of the multiplication. - If this.RowCount != rightSide.Count. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - If result.Count != this.ColumnCount. - If this.RowCount != .Count. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - If this.Rows != other.RowCount. - If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. - - - - Multiplies the transpose of this matrix with another matrix and returns the result. - - The matrix to multiply with. - If this.Rows != other.RowCount. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - If this.Columns != other.ColumnCount. - If the result matrix's dimensions are not the this.RowCount x other.RowCount. - - - - Multiplies this matrix with the conjugate transpose of another matrix and returns the result. - - The matrix to multiply with. - If this.Columns != other.ColumnCount. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix by a vector and returns the result. - - The vector to multiply with. - The result of the multiplication. - If this.RowCount != rightSide.Count. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - If result.Count != this.ColumnCount. - If this.RowCount != .Count. - - - - Multiplies the conjugate transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - If this.Rows != other.RowCount. - If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. - - - - Multiplies the conjugate transpose of this matrix with another matrix and returns the result. - - The matrix to multiply with. - If this.Rows != other.RowCount. 
- The result of the multiplication. - - - - Raises this square matrix to a positive integer exponent and places the results into the result matrix. - - The positive integer exponent to raise the matrix to. - The result of the power. - - - - Multiplies this square matrix with another matrix and returns the result. - - The positive integer exponent to raise the matrix to. - - - - Negate each element of this matrix. - - A matrix containing the negated values. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - if the result matrix's dimensions are not the same as this matrix. - - - - Complex conjugate each element of this matrix. - - A matrix containing the conjugated values. - - - - Complex conjugate each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - if the result matrix's dimensions are not the same as this matrix. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the matrix. - - The scalar denominator to use. - A matrix containing the results. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the matrix. - - The scalar numerator to use. - A matrix containing the results. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the matrix. - - The scalar numerator to use. - Matrix to store the results in. - - - - Computes the remainder (matrix % divisor), where the result has the sign of the dividend, - for each element of the matrix. - - The scalar denominator to use. - A matrix containing the results. - - - - Computes the remainder (matrix % divisor), where the result has the sign of the dividend, - for each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (dividend % matrix), where the result has the sign of the dividend, - for each element of the matrix. - - The scalar numerator to use. - A matrix containing the results. - - - - Computes the remainder (dividend % matrix), where the result has the sign of the dividend, - for each element of the matrix. - - The scalar numerator to use. - Matrix to store the results in. - - - - Pointwise multiplies this matrix with another matrix. - - The matrix to pointwise multiply with this one. - If this matrix and are not the same size. - A new matrix that is the pointwise multiplication of this matrix and . - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - If this matrix and are not the same size. - If this matrix and are not the same size. - - - - Pointwise divide this matrix by another matrix. - - The pointwise denominator matrix to use. - If this matrix and are not the same size. - A new matrix that is the pointwise division of this matrix and . - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use. - The matrix to store the result of the pointwise division. - If this matrix and are not the same size. 
- If this matrix and are not the same size. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - - - - Pointwise raise this matrix to an exponent. - - The exponent to raise this matrix values to. - The matrix to store the result into. - If this matrix and are not the same size. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - - - - Pointwise raise this matrix to an exponent. - - The exponent to raise this matrix values to. - The matrix to store the result into. - If this matrix and are not the same size. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix by another matrix. - - The pointwise denominator matrix to use. - If this matrix and are not the same size. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix by another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use. - The matrix to store the result of the pointwise modulus. - If this matrix and are not the same size. - If this matrix and are not the same size. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix by another matrix. - - The pointwise denominator matrix to use. - If this matrix and are not the same size. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix by another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use. - The matrix to store the result of the pointwise remainder. - If this matrix and are not the same size. - If this matrix and are not the same size. - - - - Helper function to apply a unary function to a matrix. The function - f modifies the matrix given to it in place. Before its - called, a copy of the 'this' matrix is first created, then passed to - f. The copy is then returned as the result - - Function which takes a matrix, modifies it in place and returns void - New instance of matrix which is the result - - - - Helper function to apply a unary function which modifies a matrix - in place. - - Function which takes a matrix, modifies it in place and returns void - The matrix to be passed to f and where the result is to be stored - If this vector and are not the same size. - - - - Helper function to apply a binary function which takes two matrices - and modifies the latter in place. A copy of the "this" matrix is - first made and then passed to f together with the other matrix. The - copy is then returned as the result - - Function which takes two matrices, modifies the second in place and returns void - The other matrix to be passed to the function as argument. It is not modified - The resulting matrix - If this matrix and are not the same dimension. - - - - Helper function to apply a binary function which takes two matrices - and modifies the second one in place - - Function which takes two matrices, modifies the second in place and returns void - The other matrix to be passed to the function as argument. It is not modified - The matrix to store the result. - The resulting matrix - If this matrix and are not the same dimension. - - - - Pointwise applies the exponent function to each value. - - - - - Pointwise applies the exponent function to each value. - - The matrix to store the result. 
- If this matrix and are not the same size. - - - - Pointwise applies the natural logarithm function to each value. - - - - - Pointwise applies the natural logarithm function to each value. - - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the abs function to each value - - - - - Pointwise applies the abs function to each value - - The vector to store the result - - - - Pointwise applies the acos function to each value - - - - - Pointwise applies the acos function to each value - - The vector to store the result - - - - Pointwise applies the asin function to each value - - - - - Pointwise applies the asin function to each value - - The vector to store the result - - - - Pointwise applies the atan function to each value - - - - - Pointwise applies the atan function to each value - - The vector to store the result - - - - Pointwise applies the atan2 function to each value of the current - matrix and a given other matrix being the 'x' of atan2 and the - 'this' matrix being the 'y' - - - - - - - Pointwise applies the atan2 function to each value of the current - matrix and a given other matrix being the 'x' of atan2 and the - 'this' matrix being the 'y' - - The other matrix 'y' - The matrix with the result and 'x' - - - - - Pointwise applies the ceiling function to each value - - - - - Pointwise applies the ceiling function to each value - - The vector to store the result - - - - Pointwise applies the cos function to each value - - - - - Pointwise applies the cos function to each value - - The vector to store the result - - - - Pointwise applies the cosh function to each value - - - - - Pointwise applies the cosh function to each value - - The vector to store the result - - - - Pointwise applies the floor function to each value - - - - - Pointwise applies the floor function to each value - - The vector to store the result - - - - Pointwise applies the log10 function to each value - - - - - Pointwise applies the log10 function to each value - - The vector to store the result - - - - Pointwise applies the round function to each value - - - - - Pointwise applies the round function to each value - - The vector to store the result - - - - Pointwise applies the sign function to each value - - - - - Pointwise applies the sign function to each value - - The vector to store the result - - - - Pointwise applies the sin function to each value - - - - - Pointwise applies the sin function to each value - - The vector to store the result - - - - Pointwise applies the sinh function to each value - - - - - Pointwise applies the sinh function to each value - - The vector to store the result - - - - Pointwise applies the sqrt function to each value - - - - - Pointwise applies the sqrt function to each value - - The vector to store the result - - - - Pointwise applies the tan function to each value - - - - - Pointwise applies the tan function to each value - - The vector to store the result - - - - Pointwise applies the tanh function to each value - - - - - Pointwise applies the tanh function to each value - - The vector to store the result - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - - Calculates the rank of the matrix. - - effective numerical rank, obtained from SVD - - - - Calculates the nullity of the matrix. - - effective numerical nullity, obtained from SVD - - - Calculates the condition number of this matrix. - The condition number of the matrix. 
- The condition number is calculated using singular value decomposition. - - - Computes the determinant of this matrix. - The determinant of this matrix. - - - - Computes an orthonormal basis for the null space of this matrix, - also known as the kernel of the corresponding matrix transformation. - - - - - Computes an orthonormal basis for the column space of this matrix, - also known as the range or image of the corresponding matrix transformation. - - - - Computes the inverse of this matrix. - The inverse of this matrix. - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N - with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. - - The other matrix. - The Kronecker product of the two matrices. - - - - Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N - with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. - - The other matrix. - The Kronecker product of the two matrices. - If the result matrix's dimensions are not (this.Rows * lower.rows) x (this.Columns * lower.Columns). - - - - Pointwise applies the minimum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the minimum with a scalar to each value. - - The scalar value to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the maximum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the maximum with a scalar to each value. - - The scalar value to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the absolute minimum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the absolute minimum with a scalar to each value. - - The scalar value to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the absolute maximum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the absolute maximum with a scalar to each value. - - The scalar value to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the minimum with the values of another matrix to each value. - - The matrix with the values to compare to. - - - - Pointwise applies the minimum with the values of another matrix to each value. - - The matrix with the values to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the maximum with the values of another matrix to each value. - - The matrix with the values to compare to. - - - - Pointwise applies the maximum with the values of another matrix to each value. - - The matrix with the values to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the absolute minimum with the values of another matrix to each value. - - The matrix with the values to compare to. - - - - Pointwise applies the absolute minimum with the values of another matrix to each value. - - The matrix with the values to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the absolute maximum with the values of another matrix to each value. - - The matrix with the values to compare to. 
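One common use of the pointwise minimum/maximum helpers above is clamping every element into a range; combined with the whole-matrix operations (determinant, inverse, pseudo-inverse, Kronecker product) a sketch might look like this (assuming scalar overloads `PointwiseMinimum(double)`/`PointwiseMaximum(double)` as described above):

```csharp
using MathNet.Numerics.LinearAlgebra;

var A = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { -3.0, 0.5 },
    {  2.0, 7.0 }
});

// Clamp every element into [-1, 1] with the pointwise min/max helpers.
var clamped = A.PointwiseMaximum(-1.0).PointwiseMinimum(1.0);

// A few of the whole-matrix operations documented above.
var det  = A.Determinant();
var inv  = A.Inverse();          // A is 2x2 and non-singular here
var pinv = A.PseudoInverse();
var kron = A.KroneckerProduct(Matrix<double>.Build.DenseIdentity(2));
```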
- - - - Pointwise applies the absolute maximum with the values of another matrix to each value. - - The matrix with the values to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - For sparse matrices, the L2 norm is computed using a dense implementation of singular value decomposition. - In a later release, it will be replaced with a sparse implementation. - - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Calculates the p-norms of all row vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the p-norms of all column vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all row vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all column vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the value sum of each row vector. - - - - - Calculates the value sum of each column vector. - - - - - Calculates the absolute value sum of each row vector. - - - - - Calculates the absolute value sum of each column vector. - - - - - Indicates whether the current object is equal to another object of the same type. - - - An object to compare with this object. - - - true if the current object is equal to the parameter; otherwise, false. - - - - - Determines whether the specified is equal to this instance. - - The to compare with this instance. - - true if the specified is equal to this instance; otherwise, false. - - - - - Returns a hash code for this instance. - - - A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. - - - - - Creates a new object that is a copy of the current instance. - - - A new object that is a copy of this instance. - - - - - Returns a string that describes the type, dimensions and shape of this matrix. - - - - - Returns a string 2D array that summarizes the content of this matrix. - - - - - Returns a string 2D array that summarizes the content of this matrix. - - - - - Returns a string that summarizes the content of this matrix. - - - - - Returns a string that summarizes the content of this matrix. - - - - - Returns a string that summarizes this matrix. - - - - - Returns a string that summarizes this matrix. - The maximum number of cells can be configured in the class. - - - - - Returns a string that summarizes this matrix. - The maximum number of cells can be configured in the class. - The format string is ignored. - - - - - Initializes a new instance of the Matrix class. - - - - - Gets the raw matrix data storage. - - - - - Gets the number of columns. - - The number of columns. - - - - Gets the number of rows. - - The number of rows. - - - - Gets or sets the value at the given row and column, with range checking. - - - The row of the element. - - - The column of the element. - - The value to get or set. 
- This method is ranged checked. and - to get and set values without range checking. - - - - Retrieves the requested element without range checking. - - - The row of the element. - - - The column of the element. - - - The requested element. - - - - - Sets the value of the given element without range checking. - - - The row of the element. - - - The column of the element. - - - The value to set the element to. - - - - - Sets all values to zero. - - - - - Sets all values of a row to zero. - - - - - Sets all values of a column to zero. - - - - - Sets all values for all of the chosen rows to zero. - - - - - Sets all values for all of the chosen columns to zero. - - - - - Sets all values of a sub-matrix to zero. - - - - - Set all values whose absolute value is smaller than the threshold to zero, in-place. - - - - - Set all values that meet the predicate to zero, in-place. - - - - - Creates a clone of this instance. - - - A clone of the instance. - - - - - Copies the elements of this matrix to the given matrix. - - - The matrix to copy values into. - - - If target is . - - - If this and the target matrix do not have the same dimensions.. - - - - - Copies a row into an Vector. - - The row to copy. - A Vector containing the copied elements. - If is negative, - or greater than or equal to the number of rows. - - - - Copies a row into to the given Vector. - - The row to copy. - The Vector to copy the row into. - If the result vector is . - If is negative, - or greater than or equal to the number of rows. - If this.Columns != result.Count. - - - - Copies the requested row elements into a new Vector. - - The row to copy elements from. - The column to start copying from. - The number of elements to copy. - A Vector containing the requested elements. - If: - is negative, - or greater than or equal to the number of rows. - is negative, - or greater than or equal to the number of columns. - (columnIndex + length) >= Columns. - If is not positive. - - - - Copies the requested row elements into a new Vector. - - The row to copy elements from. - The column to start copying from. - The number of elements to copy. - The Vector to copy the column into. - If the result Vector is . - If is negative, - or greater than or equal to the number of columns. - If is negative, - or greater than or equal to the number of rows. - If + - is greater than or equal to the number of rows. - If is not positive. - If result.Count < length. - - - - Copies a column into a new Vector>. - - The column to copy. - A Vector containing the copied elements. - If is negative, - or greater than or equal to the number of columns. - - - - Copies a column into to the given Vector. - - The column to copy. - The Vector to copy the column into. - If the result Vector is . - If is negative, - or greater than or equal to the number of columns. - If this.Rows != result.Count. - - - - Copies the requested column elements into a new Vector. - - The column to copy elements from. - The row to start copying from. - The number of elements to copy. - A Vector containing the requested elements. - If: - is negative, - or greater than or equal to the number of columns. - is negative, - or greater than or equal to the number of rows. - (rowIndex + length) >= Rows. - - If is not positive. - - - - Copies the requested column elements into the given vector. - - The column to copy elements from. - The row to start copying from. - The number of elements to copy. - The Vector to copy the column into. - If the result Vector is . 
- If is negative, - or greater than or equal to the number of columns. - If is negative, - or greater than or equal to the number of rows. - If + - is greater than or equal to the number of rows. - If is not positive. - If result.Count < length. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Returns the elements of the diagonal in a Vector. - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a new matrix and inserts the given column at the given index. - - The index of where to insert the column. - The column to insert. - A new matrix with the inserted column. - If is . - If is < zero or > the number of columns. - If the size of != the number of rows. - - - - Creates a new matrix with the given column removed. - - The index of the column to remove. - A new matrix without the chosen column. - If is < zero or >= the number of columns. - - - - Copies the values of the given Vector to the specified column. - - The column to copy the values to. - The vector to copy the values from. - If is . - If is less than zero, - or greater than or equal to the number of columns. - If the size of does not - equal the number of rows of this Matrix. - - - - Copies the values of the given Vector to the specified sub-column. - - The column to copy the values to. - The row to start copying to. - The number of elements to copy. - The vector to copy the values from. - If is . - If is less than zero, - or greater than or equal to the number of columns. - If the size of does not - equal the number of rows of this Matrix. 
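A short sketch of the row/column and sub-matrix accessors described above (the indices and sizes are made up for illustration; the copying variants return independent data, while SetColumn writes in place):

```csharp
using MathNet.Numerics.LinearAlgebra;

var A = Matrix<double>.Build.Dense(3, 3, (i, j) => 3 * i + j);   // values 0..8

var row1  = A.Row(1);                 // independent copy of row 1
var col2  = A.Column(2);              // independent copy of column 2
var block = A.SubMatrix(0, 2, 1, 2);  // 2x2 block: rows 0-1, columns 1-2
var wider = A.InsertColumn(3, Vector<double>.Build.Dense(3, 1.0));
var slim  = A.RemoveRow(0);

// SetColumn modifies the matrix in place.
A.SetColumn(0, Vector<double>.Build.Dense(new[] { 9.0, 9.0, 9.0 }));
```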
This hunk deletes the auto-generated Math.NET Numerics XML IntelliSense documentation that was checked into the repository; only documentation text is removed here, no source code. The deleted comments describe:

* the Matrix API: row/column/sub-matrix/diagonal copy helpers, transpose and conjugate transpose, permutation, concatenation and stacking, the ToArray/ToRowMajorArray/ToColumnMajorArray conversions and their internal-storage counterparts, enumerators, Map/Fold/Reduce helpers, element predicates, and the arithmetic and pointwise operators (sqrt, exp, log, trigonometric functions, abs, floor, ceiling, round)
* the Cholesky, LU, QR (including Modified Gram-Schmidt), SVD and EVD decompositions and the direct and iterative Solve overloads
* the iterative-solver infrastructure: stop criteria (iteration count, residual, divergence, NaN, cancellation, delegate), the Iterator, the solver and preconditioner interfaces, and the unit preconditioner
* dense and sparse (compressed sparse row) matrix and vector storage
* the generic Vector arithmetic and pointwise functions

As documented there, ToColumnMajorArray flattens the matrix 1,2,3 / 4,5,6 / 7,8,9 column by column into 1, 4, 7, 2, 5, 8, 3, 6, 9, while ToRowMajorArray flattens it row by row into 1, 2, 3, 4, 5, 6, 7, 8, 9.
- - - - Pointwise applies the maximum with a scalar to each value. - - The scalar value to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the absolute minimum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the absolute minimum with a scalar to each value. - - The scalar value to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the absolute maximum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the absolute maximum with a scalar to each value. - - The scalar value to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the minimum with the values of another vector to each value. - - The vector with the values to compare to. - - - - Pointwise applies the minimum with the values of another vector to each value. - - The vector with the values to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the maximum with the values of another vector to each value. - - The vector with the values to compare to. - - - - Pointwise applies the maximum with the values of another vector to each value. - - The vector with the values to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the absolute minimum with the values of another vector to each value. - - The vector with the values to compare to. - - - - Pointwise applies the absolute minimum with the values of another vector to each value. - - The vector with the values to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the absolute maximum with the values of another vector to each value. - - The vector with the values to compare to. - - - - Pointwise applies the absolute maximum with the values of another vector to each value. - - The vector with the values to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = (sum(abs(this[i])^p))^(1/p) - - - - Normalizes this vector to a unit vector with respect to the p-norm. - - The p value. - This vector normalized to a unit vector with respect to the p-norm. - - - - Returns the value of the absolute minimum element. - - The value of the absolute minimum element. - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the value of the absolute maximum element. - - The value of the absolute maximum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the value of maximum element. - - The value of maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the value of the minimum element. - - The value of the minimum element. - - - - Returns the index of the minimum element. - - The index of minimum element. 
- - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Computes the sum of the absolute value of the vector's elements. - - The sum of the absolute value of the vector's elements. - - - - Indicates whether the current object is equal to another object of the same type. - - An object to compare with this object. - - true if the current object is equal to the parameter; otherwise, false. - - - - - Determines whether the specified is equal to this instance. - - The to compare with this instance. - - true if the specified is equal to this instance; otherwise, false. - - - - - Returns a hash code for this instance. - - - A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. - - - - - Creates a new object that is a copy of the current instance. - - - A new object that is a copy of this instance. - - - - - Returns an enumerator that iterates through the collection. - - - A that can be used to iterate through the collection. - - - - - Returns an enumerator that iterates through a collection. - - - An object that can be used to iterate through the collection. - - - - - Returns a string that describes the type, dimensions and shape of this vector. - - - - - Returns a string that represents the content of this vector, column by column. - - Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. - Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. - Character to use to print if there is not enough space to print all entries. Typical value: "..". - Character to use to separate two columns on a line. Typical value: " " (2 spaces). - Character to use to separate two rows/lines. Typical value: Environment.NewLine. - Function to provide a string for any given entry value. - - - - Returns a string that represents the content of this vector, column by column. - - Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. - Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. - Floating point format string. Can be null. Default value: G6. - Format provider or culture. Can be null. - - - - Returns a string that represents the content of this vector, column by column. - - Floating point format string. Can be null. Default value: G6. - Format provider or culture. Can be null. - - - - Returns a string that summarizes this vector, column by column and with a type header. - - Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. - Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. - Floating point format string. Can be null. Default value: G6. - Format provider or culture. Can be null. - - - - Returns a string that summarizes this vector. - The maximum number of cells can be configured in the class. - - - - - Returns a string that summarizes this vector. - The maximum number of cells can be configured in the class. - The format string is ignored. - - - - - Initializes a new instance of the Vector class. - - - - - Gets the raw vector data storage. - - - - - Gets the length or number of dimensions of this vector. - - - - Gets or sets the value at the given . - The index of the value to get or set. - The value of the vector at the given . - If is negative or - greater than the size of the vector. - - - Gets the value at the given without range checking.. - The index of the value to get or set. - The value of the vector at the given . 
- - - Sets the at the given without range checking.. - The index of the value to get or set. - The value to set. - - - - Resets all values to zero. - - - - - Sets all values of a subvector to zero. - - - - - Set all values whose absolute value is smaller than the threshold to zero, in-place. - - - - - Set all values that meet the predicate to zero, in-place. - - - - - Returns a deep-copy clone of the vector. - - A deep-copy clone of the vector. - - - - Set the values of this vector to the given values. - - The array containing the values to use. - If is . - If is not the same size as this vector. - - - - Copies the values of this vector into the target vector. - - The vector to copy elements into. - If is . - If is not the same size as this vector. - - - - Creates a vector containing specified elements. - - The first element to begin copying from. - The number of elements to copy. - A vector containing a copy of the specified elements. - If is not positive or - greater than or equal to the size of the vector. - If + is greater than or equal to the size of the vector. - - If is not positive. - - - - Copies the values of a given vector into a region in this vector. - - The field to start copying to - The number of fields to copy. Must be positive. - The sub-vector to copy from. - If is - - - - Copies the requested elements from this vector to another. - - The vector to copy the elements to. - The element to start copying from. - The element to start copying to. - The number of elements to copy. - - - - Returns the data contained in the vector as an array. - The returned array will be independent from this vector. - A new memory block will be allocated for the array. - - The vector's data as an array. - - - - Returns the internal array of this vector if, and only if, this vector is stored by such an array internally. - Otherwise returns null. Changes to the returned array and the vector will affect each other. - Use ToArray instead if you always need an independent array. - - - - - Create a matrix based on this vector in column form (one single column). - - - This vector as a column matrix. - - - - - Create a matrix based on this vector in row form (one single row). - - - This vector as a row matrix. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector. - - - The enumerator will include all values, even if they are zero. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector. - - - The enumerator will include all values, even if they are zero. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector and their index. - - - The enumerator returns a Tuple with the first value being the element index - and the second value being the value of the element at that index. - The enumerator will include all values, even if they are zero. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector and their index. - - - The enumerator returns a Tuple with the first value being the element index - and the second value being the value of the element at that index. - The enumerator will include all values, even if they are zero. - - - - - Applies a function to each value of this vector and replaces the value with its result. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). 
- - - - - Applies a function to each value of this vector and replaces the value with its result. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and returns the results as a new vector. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and returns the results as a new vector. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value pair of two vectors and replaces the value in the result vector. - - - - - Applies a function to each value pair of two vectors and returns the results as a new vector. - - - - - Applies a function to update the status with each value pair of two vectors and returns the resulting status. - - - - - Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns a tuple with the index and values of the first element pair of two vectors of the same size satisfying a predicate, or null if none is found. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if at least one element satisfies a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if at least one element pairs of two vectors of the same size satisfies a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if all elements satisfy a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). 
- - - - - Returns true if all element pairs of two vectors of the same size satisfy a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns a Vector containing the same values of . - - This method is included for completeness. - The vector to get the values from. - A vector containing the same values as . - If is . - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Adds a scalar to each element of a vector. - - The vector to add to. - The scalar value to add. - The result of the addition. - If is . - - - - Adds a scalar to each element of a vector. - - The scalar value to add. - The vector to add to. - The result of the addition. - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Subtracts a scalar from each element of a vector. - - The vector to subtract from. - The scalar value to subtract. - The result of the subtraction. - If is . - - - - Subtracts each element of a vector from a scalar. - - The scalar value to subtract from. - The vector to subtract. - The result of the subtraction. - If is . - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . - - - - Divides a scalar with a vector. - - The scalar to divide. - The vector. - The result of the division. - If is . - - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Pointwise divides two Vectors. - - The vector to divide. - The other vector. - The result of the division. - If and are not the same size. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the remainder of. - The divisor to use. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of the given dividend of each element of the vector. - - The dividend we want to compute the remainder of. - The vector whose elements we want to use as divisor. - If is . - - - - Computes the pointwise remainder (% operator), where the result has the sign of the dividend, - of each element of two vectors. - - The vector whose elements we want to compute the remainder of. - The divisor to use. - If and are not the same size. - If is . 
- - - - Computes the sqrt of a vector pointwise - - The input vector - - - - - Computes the exponential of a vector pointwise - - The input vector - - - - - Computes the log of a vector pointwise - - The input vector - - - - - Computes the log10 of a vector pointwise - - The input vector - - - - - Computes the sin of a vector pointwise - - The input vector - - - - - Computes the cos of a vector pointwise - - The input vector - - - - - Computes the tan of a vector pointwise - - The input vector - - - - - Computes the asin of a vector pointwise - - The input vector - - - - - Computes the acos of a vector pointwise - - The input vector - - - - - Computes the atan of a vector pointwise - - The input vector - - - - - Computes the sinh of a vector pointwise - - The input vector - - - - - Computes the cosh of a vector pointwise - - The input vector - - - - - Computes the tanh of a vector pointwise - - The input vector - - - - - Computes the absolute value of a vector pointwise - - The input vector - - - - - Computes the floor of a vector pointwise - - The input vector - - - - - Computes the ceiling of a vector pointwise - - The input vector - - - - - Computes the rounded value of a vector pointwise - - The input vector - - - - - Converts a vector to single precision. - - - - - Converts a vector to double precision. - - - - - Converts a vector to single precision complex numbers. - - - - - Converts a vector to double precision complex numbers. - - - - - Gets a single precision complex vector with the real parts from the given vector. - - - - - Gets a double precision complex vector with the real parts from the given vector. - - - - - Gets a real vector representing the real parts of a complex vector. - - - - - Gets a real vector representing the real parts of a complex vector. - - - - - Gets a real vector representing the imaginary parts of a complex vector. - - - - - Gets a real vector representing the imaginary parts of a complex vector. - - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - - Predictor matrix X - Response vector Y - The direct method to be used to compute the regression. - Best fitting vector for model parameters β - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - - Predictor matrix X - Response matrix Y - The direct method to be used to compute the regression. - Best fitting vector for model parameters β - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - - List of predictor-arrays. - List of responses - True if an intercept should be added as first artificial predictor value. Default = false. - The direct method to be used to compute the regression. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses the cholesky decomposition of the normal equations. - - Sequence of predictor-arrays and their response. - True if an intercept should be added as first artificial predictor value. Default = false. - The direct method to be used to compute the regression. - Best fitting list of model parameters β for each element in the predictor-arrays. 
- - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - Uses the cholesky decomposition of the normal equations. - - Predictor matrix X - Response vector Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - Uses the cholesky decomposition of the normal equations. - - Predictor matrix X - Response matrix Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses the cholesky decomposition of the normal equations. - - List of predictor-arrays. - List of responses - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses the cholesky decomposition of the normal equations. - - Sequence of predictor-arrays and their response. - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. - - Predictor matrix X - Response vector Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. - - Predictor matrix X - Response matrix Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. - - List of predictor-arrays. - List of responses - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. - - Sequence of predictor-arrays and their response. - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. 
- Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. - - Predictor matrix X - Response vector Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. - - Predictor matrix X - Response matrix Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. - - List of predictor-arrays. - List of responses - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. - - Sequence of predictor-arrays and their response. - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, - returning its best fitting parameters as (a, b) tuple, - where a is the intercept and b the slope. - - Predictor (independent) - Response (dependent) - - - - Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, - returning its best fitting parameters as (a, b) tuple, - where a is the intercept and b the slope. - - Predictor-Response samples as tuples - - - - Least-Squares fitting the points (x,y) to a line y : x -> b*x, - returning its best fitting parameter b, - where the intercept is zero and b the slope. - - Predictor (independent) - Response (dependent) - - - - Least-Squares fitting the points (x,y) to a line y : x -> b*x, - returning its best fitting parameter b, - where the intercept is zero and b the slope. - - Predictor-Response samples as tuples - - - - Weighted Linear Regression using normal equations. - - Predictor matrix X - Response vector Y - Weight matrix W, usually diagonal with an entry for each predictor (row). - - - - Weighted Linear Regression using normal equations. - - Predictor matrix X - Response matrix Y - Weight matrix W, usually diagonal with an entry for each predictor (row). - - - - Weighted Linear Regression using normal equations. - - Predictor matrix X - Response vector Y - Weight matrix W, usually diagonal with an entry for each predictor (row). - True if an intercept should be added as first artificial predictor value. Default = false. - - - - Weighted Linear Regression using normal equations. - - List of sample vectors (predictor) together with their response. - List of weights, one for each sample. - True if an intercept should be added as first artificial predictor value. Default = false. 
- - - - Locally-Weighted Linear Regression using normal equations. - - - - - Locally-Weighted Linear Regression using normal equations. - - - - - First Order AB method(same as Forward Euler) - - Initial value - Start Time - End Time - Size of output array(the larger, the finer) - ode model - approximation with size N - - - - Second Order AB Method - - Initial value 1 - Start Time - End Time - Size of output array(the larger, the finer) - ode model - approximation with size N - - - - Third Order AB Method - - Initial value 1 - Start Time - End Time - Size of output array(the larger, the finer) - ode model - approximation with size N - - - - Fourth Order AB Method - - Initial value 1 - Start Time - End Time - Size of output array(the larger, the finer) - ode model - approximation with size N - - - - ODE Solver Algorithms - - - - - Second Order Runge-Kutta method - - initial value - start time - end time - Size of output array(the larger, the finer) - ode function - approximations - - - - Fourth Order Runge-Kutta method - - initial value - start time - end time - Size of output array(the larger, the finer) - ode function - approximations - - - - Second Order Runge-Kutta to solve ODE SYSTEM - - initial vector - start time - end time - Size of output array(the larger, the finer) - ode function - approximations - - - - Fourth Order Runge-Kutta to solve ODE SYSTEM - - initial vector - start time - end time - Size of output array(the larger, the finer) - ode function - approximations - - - - Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm is an iterative method for solving box-constrained nonlinear optimization problems - http://www.ece.northwestern.edu/~nocedal/PSfiles/limited.ps.gz - - - - - Find the minimum of the objective function given lower and upper bounds - - The objective function, must support a gradient - The lower bound - The upper bound - The initial guess - The MinimizationResult which contains the minimum and the ExitCondition - - - - Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems - - - - - Creates BFGS minimizer - - The gradient tolerance - The parameter tolerance - The function progress tolerance - The maximum number of iterations - - - - Find the minimum of the objective function given lower and upper bounds - - The objective function, must support a gradient - The initial guess - The MinimizationResult which contains the minimum and the ExitCondition - - - - - Creates a base class for BFGS minimization - - - - - Broyden-Fletcher-Goldfarb-Shanno solver for finding function minima - See http://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm - Inspired by implementation: https://github.com/PatWie/CppNumericalSolvers/blob/master/src/BfgsSolver.cpp - - - - - Finds a minimum of a function by the BFGS quasi-Newton method - This uses the function and it's gradient (partial derivatives in each direction) and approximates the Hessian - - An initial guess - Evaluates the function at a point - Evaluates the gradient of the function at a point - The minimum found - - - - Objective function with a frozen evaluation that must not be changed from the outside. - - - - Create a new unevaluated and independent copy of this objective function - - - - Objective function with a mutable evaluation. - - - - Create a new independent copy of this objective function, evaluated at the same point. - - - - Get the y-values of the observations. 
- - - - - Get the values of the weights for the observations. - - - - - Get the y-values of the fitted model that correspond to the independent values. - - - - - Get the values of the parameters. - - - - - Get the residual sum of squares. - - - - - Get the Gradient vector. G = J'(y - f(x; p)) - - - - - Get the approximated Hessian matrix. H = J'J - - - - - Get the number of calls to function. - - - - - Get the number of calls to jacobian. - - - - - Get the degree of freedom. - - - - - The scale factor for initial mu - - - - - Non-linear least square fitting by the Levenberg-Marduardt algorithm. - - The objective function, including model, observations, and parameter bounds. - The initial guess values. - The initial damping parameter of mu. - The stopping threshold for infinity norm of the gradient vector. - The stopping threshold for L2 norm of the change of parameters. - The stopping threshold for L2 norm of the residuals. - The max iterations. - The result of the Levenberg-Marquardt minimization - - - - Limited Memory version of Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm - - - - - - Creates L-BFGS minimizer - - Numbers of gradients and steps to store. - - - - Find the minimum of the objective function given lower and upper bounds - - The objective function, must support a gradient - The initial guess - The MinimizationResult which contains the minimum and the ExitCondition - - - - Search for a step size alpha that satisfies the weak Wolfe conditions. The weak Wolfe - Conditions are - i) Armijo Rule: f(x_k + alpha_k p_k) <= f(x_k) + c1 alpha_k p_k^T g(x_k) - ii) Curvature Condition: p_k^T g(x_k + alpha_k p_k) >= c2 p_k^T g(x_k) - where g(x) is the gradient of f(x), 0 < c1 < c2 < 1. - - Implementation is based on http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf - - references: - http://en.wikipedia.org/wiki/Wolfe_conditions - http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf - - - - Implemented following http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf - The objective function being optimized, evaluated at the starting point of the search - Search direction - Initial size of the step in the search direction - - - - The objective function being optimized, evaluated at the starting point of the search - Search direction - Initial size of the step in the search direction - The upper bound - - - - Creates a base class for minimization - - The gradient tolerance - The parameter tolerance - The function progress tolerance - The maximum number of iterations - - - - Class implementing the Nelder-Mead simplex algorithm, used to find a minima when no gradient is available. - Called fminsearch() in Matlab. 
A description of the algorithm can be found at - http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 - or - https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method - - - - - Finds the minimum of the objective function without an initial perturbation, the default values used - by fminsearch() in Matlab are used instead - http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 - - The objective function, no gradient or hessian needed - The initial guess - The minimum point - - - - Finds the minimum of the objective function with an initial perturbation - - The objective function, no gradient or hessian needed - The initial guess - The initial perturbation - The minimum point - - - - Finds the minimum of the objective function without an initial perturbation, the default values used - by fminsearch() in Matlab are used instead - http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 - - The objective function, no gradient or hessian needed - The initial guess - The minimum point - - - - Finds the minimum of the objective function with an initial perturbation - - The objective function, no gradient or hessian needed - The initial guess - The initial perturbation - The minimum point - - - - Evaluate the objective function at each vertex to create a corresponding - list of error values for each vertex - - - - - - - - Check whether the points in the error profile have so little range that we - consider ourselves to have converged - - - - - - - - - Examine all error values to determine the ErrorProfile - - - - - - - Construct an initial simplex, given starting guesses for the constants, and - initial step sizes for each dimension - - - - - - - Test a scaling operation of the high point, and replace it if it is an improvement - - - - - - - - - - - Contract the simplex uniformly around the lowest point - - - - - - - - - Compute the centroid of all points except the worst - - - - - - - - The value of the constant - - - - - Returns the best fit parameters. - - - - - Returns the standard errors of the corresponding parameters - - - - - Returns the y-values of the fitted model that correspond to the independent values. - - - - - Returns the covariance matrix at minimizing point. - - - - - Returns the correlation matrix at minimizing point. - - - - - The stopping threshold for the function value or L2 norm of the residuals. - - - - - The stopping threshold for L2 norm of the change of the parameters. - - - - - The stopping threshold for infinity norm of the gradient. - - - - - The maximum number of iterations. - - - - - The lower bound of the parameters. - - - - - The upper bound of the parameters. - - - - - The scale factors for the parameters. - - - - - Objective function where neither Gradient nor Hessian is available. - - - - - Objective function where the Gradient is available. Greedy evaluation. - - - - - Objective function where the Gradient is available. Lazy evaluation. - - - - - Objective function where the Hessian is available. Greedy evaluation. - - - - - Objective function where the Hessian is available. Lazy evaluation. - - - - - Objective function where both Gradient and Hessian are available. Greedy evaluation. - - - - - Objective function where both Gradient and Hessian are available. Lazy evaluation. - - - - - Objective function where neither first nor second derivative is available. - - - - - Objective function where the first derivative is available. 
- - - - - Objective function where the first and second derivatives are available. - - - - - objective model with a user supplied jacobian for non-linear least squares regression. - - - - - Objective model for non-linear least squares regression. - - - - - Objective model with a user supplied jacobian for non-linear least squares regression. - - - - - Objective model for non-linear least squares regression. - - - - - Objective function with a user supplied jacobian for nonlinear least squares regression. - - - - - Objective function for nonlinear least squares regression. - The numerical jacobian with accuracy order is used. - - - - - Adapts an objective function with only value implemented - to provide a gradient as well. Gradient calculation is - done using the finite difference method, specifically - forward differences. - - For each gradient computed, the algorithm requires an - additional number of function evaluations equal to the - functions's number of input parameters. - - - - - Set or get the values of the independent variable. - - - - - Set or get the values of the observations. - - - - - Set or get the values of the weights for the observations. - - - - - Get whether parameters are fixed or free. - - - - - Get the number of observations. - - - - - Get the number of unknown parameters. - - - - - Get the degree of freedom - - - - - Get the number of calls to function. - - - - - Get the number of calls to jacobian. - - - - - Set or get the values of the parameters. - - - - - Get the y-values of the fitted model that correspond to the independent values. - - - - - Get the residual sum of squares. - - - - - Get the Gradient vector of x and p. - - - - - Get the Hessian matrix of x and p, J'WJ - - - - - Set observed data to fit. - - - - - Set parameters and bounds. - - The initial values of parameters. - The list to the parameters fix or free. - - - - Non-linear least square fitting by the trust region dogleg algorithm. - - - - - The trust region subproblem. - - - - - The stopping threshold for the trust region radius. - - - - - Non-linear least square fitting by the trust-region algorithm. - - The objective model, including function, jacobian, observations, and parameter bounds. - The subproblem - The initial guess values. - The stopping threshold for L2 norm of the residuals. - The stopping threshold for infinity norm of the gradient vector. - The stopping threshold for L2 norm of the change of parameters. - The stopping threshold for trust region radius - The max iterations. - - - - - Non-linear least square fitting by the trust region Newton-Conjugate-Gradient algorithm. - - - - - Class to represent a permutation for a subset of the natural numbers. - - - - - Entry _indices[i] represents the location to which i is permuted to. - - - - - Initializes a new instance of the Permutation class. - - An array which represents where each integer is permuted too: indices[i] represents that integer i - is permuted to location indices[i]. - - - - Gets the number of elements this permutation is over. - - - - - Computes where permutes too. - - The index to permute from. - The index which is permuted to. - - - - Computes the inverse of the permutation. - - The inverse of the permutation. - - - - Construct an array from a sequence of inversions. - - - From wikipedia: the permutation 12043 has the inversions (0,2), (1,2) and (3,4). This would be - encoded using the array [22244]. - - The set of inversions to construct the permutation from. - A permutation generated from a sequence of inversions. 
- - - - Construct a sequence of inversions from the permutation. - - - From wikipedia: the permutation 12043 has the inversions (0,2), (1,2) and (3,4). This would be - encoded using the array [22244]. - - A sequence of inversions. - - - - Checks whether the array represents a proper permutation. - - An array which represents where each integer is permuted too: indices[i] represents that integer i - is permuted to location indices[i]. - True if represents a proper permutation, false otherwise. - - - - A single-variable polynomial with real-valued coefficients and non-negative exponents. - - - - - The coefficients of the polynomial in a - - - - - Only needed for the ToString method - - - - - Degree of the polynomial, i.e. the largest monomial exponent. For example, the degree of y=x^2+x^5 is 5, for y=3 it is 0. - The null-polynomial returns degree -1 because the correct degree, negative infinity, cannot be represented by integers. - - - - - Create a zero-polynomial with a coefficient array of the given length. - An array of length N can support polynomials of a degree of at most N-1. - - Length of the coefficient array - - - - Create a zero-polynomial - - - - - Create a constant polynomial. - Example: 3.0 -> "p : x -> 3.0" - - The coefficient of the "x^0" monomial. - - - - Create a polynomial with the provided coefficients (in ascending order, where the index matches the exponent). - Example: {5, 0, 2} -> "p : x -> 5 + 0 x^1 + 2 x^2". - - Polynomial coefficients as array - - - - Create a polynomial with the provided coefficients (in ascending order, where the index matches the exponent). - Example: {5, 0, 2} -> "p : x -> 5 + 0 x^1 + 2 x^2". - - Polynomial coefficients as enumerable - - - - Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k - - - - - Evaluate a polynomial at point x. - Coefficients are ordered ascending by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - Coefficients are ordered ascending by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - Coefficients are ordered ascending by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - - The location where to evaluate the polynomial at. - - - - Evaluate a polynomial at point x. - - The location where to evaluate the polynomial at. - - - - Evaluate a polynomial at points z. - - The locations where to evaluate the polynomial at. - - - - Evaluate a polynomial at points z. - - The locations where to evaluate the polynomial at. - - - - Calculates the complex roots of the Polynomial by eigenvalue decomposition - - a vector of complex numbers with the roots - - - - Get the eigenvalue matrix A of this polynomial such that eig(A) = roots of this polynomial. - - Eigenvalue matrix A - This matrix is similar to the companion matrix of this polynomial, in such a way, that it's transpose is the columnflip of the companion matrix - - - - Addition of two Polynomials (point-wise). 
- - Left Polynomial - Right Polynomial - Resulting Polynomial - - - - Addition of a polynomial and a scalar. - - - - - Subtraction of two Polynomials (point-wise). - - Left Polynomial - Right Polynomial - Resulting Polynomial - - - - Addition of a scalar from a polynomial. - - - - - Addition of a polynomial from a scalar. - - - - - Negation of a polynomial. - - - - - Multiplies a polynomial by a polynomial (convolution) - - Left polynomial - Right polynomial - Resulting Polynomial - - - - Scales a polynomial by a scalar - - Polynomial - Scalar value - Resulting Polynomial - - - - Scales a polynomial by division by a scalar - - Polynomial - Scalar value - Resulting Polynomial - - - - Euclidean long division of two polynomials, returning the quotient q and remainder r of the two polynomials a and b such that a = q*b + r - - Left polynomial - Right polynomial - A tuple holding quotient in first and remainder in second - - - - Point-wise division of two Polynomials - - Left Polynomial - Right Polynomial - Resulting Polynomial - - - - Point-wise multiplication of two Polynomials - - Left Polynomial - Right Polynomial - Resulting Polynomial - - - - Division of two polynomials returning the quotient-with-remainder of the two polynomials given - - Right polynomial - A tuple holding quotient in first and remainder in second - - - - Addition of two Polynomials (piecewise) - - Left polynomial - Right polynomial - Resulting Polynomial - - - - adds a scalar to a polynomial. - - Polynomial - Scalar value - Resulting Polynomial - - - - adds a scalar to a polynomial. - - Scalar value - Polynomial - Resulting Polynomial - - - - Subtraction of two polynomial. - - Left polynomial - Right polynomial - Resulting Polynomial - - - - Subtracts a scalar from a polynomial. - - Polynomial - Scalar value - Resulting Polynomial - - - - Subtracts a polynomial from a scalar. - - Scalar value - Polynomial - Resulting Polynomial - - - - Negates a polynomial. - - Polynomial - Resulting Polynomial - - - - Multiplies a polynomial by a polynomial (convolution). - - Left polynomial - Right polynomial - resulting Polynomial - - - - Multiplies a polynomial by a scalar. - - Polynomial - Scalar value - Resulting Polynomial - - - - Multiplies a polynomial by a scalar. - - Scalar value - Polynomial - Resulting Polynomial - - - - Divides a polynomial by scalar value. - - Polynomial - Scalar value - Resulting Polynomial - - - - Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". - - - - - Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". - - - - - Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". - - - - - Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". - - - - - Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". - - - - - Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". - - - - - Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". - - - - - Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". - - - - - Creates a new object that is a copy of the current instance. - - - A new object that is a copy of this instance. - - - - - Utilities for working with floating point numbers. 
Utilities for working with floating point numbers.

Useful links:
* http://docs.sun.com/source/806-3568/ncg_goldberg.html#689 - What every computer scientist should know about floating-point arithmetic
* http://en.wikipedia.org/wiki/Machine_epsilon - Gives the definition of machine epsilon

Comparison helpers:
* Comparing two doubles to determine which one is bigger: a < b -> -1; a almost equal to b (according to the given tolerance) -> 0; a > b -> +1. Overloads accept an absolute accuracy, a relative accuracy, a number of decimal places (must be 1 or larger), or a maximum error in units in the last place (ULPs, must be 1 or larger).
* "Is larger than" and "is smaller than" checks with the same set of tolerances: absolute accuracy, relative accuracy, number of decimal places, or ULPs based on the binary representation. For the decimal-places overloads the values are considered equal if their difference is smaller than half of 10^(-decimalPlaces); the tolerance is halved so that there is half the range on each side of the number, e.g. with 2 decimal places 0.01 equals anything between 0.005 and 0.015, but not 0.02 and not 0.00.
* A check whether a given double value is finite, i.e. neither NaN nor infinity.
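A hedged illustration of the decimal-places rule described above (the helper names are made up for this example and are not the library's API):

```csharp
using System;

static class DecimalPlacesCompare
{
    // Two doubles are treated as equal when their difference is smaller than
    // half of 10^(-decimalPlaces), giving half the tolerance range on each side.
    static bool AlmostEqualDecimalPlaces(double a, double b, int decimalPlaces)
    {
        if (decimalPlaces < 1) throw new ArgumentOutOfRangeException(nameof(decimalPlaces));
        double tolerance = 0.5 * Math.Pow(10, -decimalPlaces);
        return Math.Abs(a - b) < tolerance;
    }

    // "Is larger than": strictly larger and not almost equal.
    static bool IsLargerDecimalPlaces(double a, double b, int decimalPlaces)
        => a > b && !AlmostEqualDecimalPlaces(a, b, decimalPlaces);

    static void Main()
    {
        Console.WriteLine(AlmostEqualDecimalPlaces(0.01, 0.014, 2)); // True  (difference below 0.005)
        Console.WriteLine(AlmostEqualDecimalPlaces(0.01, 0.02, 2));  // False
        Console.WriteLine(IsLargerDecimalPlaces(0.02, 0.01, 2));     // True
    }
}
```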
Constants and number-manipulation helpers:
* The number of binary digits used to represent the mantissa of a double-precision and of a single-precision floating point value (in a number such as 0.134556 * 10^5 the digits are 0.134556 and the exponent is 5).
* The standard epsilon, i.e. the maximum relative precision of IEEE 754 double-precision (64 bit) and single-precision (32 bit) numbers, both according to the definition of Prof. Demmel (used in LAPACK and Scilab) and according to the definition of Prof. Higham (used in the ISO C standard and MATLAB).
* The actual machine epsilon: the smallest number that can be subtracted from (or, in the positive variant, added to) 1 yielding a result different from 1, also known as the unit roundoff error; on a standard machine these match the standard epsilons above.
* The number of significant decimal places of double-precision and single-precision numbers, and the default accuracies 10 * 2^(-53) = 1.11022302462516E-15 and 10 * 2^(-24) = 5.96046447753906E-07.
* The magnitude of a number, and the number divided by its magnitude, which is effectively a value between -10 and 10.
* A 'directional' long (or int) value: an integer that behaves like the corresponding double (or float), e.g. a negative double maps to a negative integer starting at 0 and growing more negative as the double gets more negative.
* Increment and decrement of a floating point number to the next bigger or next smaller number representable by the data type, optionally applied several times; the step length depends on the value, Increment(double.MaxValue) returns positive infinity and Decrement(double.MinValue) returns negative infinity.
* Coercing small numbers near zero to zero, either when the value is within a given count of representable numbers of zero, within a given absolute threshold, or by default when it is smaller than 2^(-53) = 1.11e-16; the tolerance must not be negative.
* Determining the range of floating point numbers (bottom and top) that match a value within a given ULP tolerance, the largest and smallest such matching numbers, and the range of ULPs that match a value within a given relative difference (throws for negative tolerances and for infinite or NaN values).
* Counting the representable floating point values between two doubles; two equal numbers evaluate to zero and two neighboring numbers evaluate to one.
* Evaluating the minimum distance to the next distinguishable number near a given double or float value, in a negative and a positive variant; the more common positive epsilon is equal to two times the negative epsilon.
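The 'directional' integer mapping and the increment/decrement operations above both come down to treating the IEEE 754 bit pattern as an integer. A hedged sketch of one-ULP stepping along those lines (illustrative only; the library's own handling of special cases may differ):

```csharp
using System;

static class UlpStepSketch
{
    // Next representable double toward positive infinity (one ULP up).
    static double Increment(double value)
    {
        if (double.IsNaN(value) || double.IsPositiveInfinity(value)) return value;
        if (value == 0.0) return double.Epsilon;          // +0.0 and -0.0 step up to the smallest subnormal
        long bits = BitConverter.DoubleToInt64Bits(value);
        bits = value > 0.0 ? bits + 1 : bits - 1;         // negative values move closer to zero
        return BitConverter.Int64BitsToDouble(bits);
    }

    // Next representable double toward negative infinity, via symmetry of the grid.
    static double Decrement(double value) => -Increment(-value);

    static void Main()
    {
        Console.WriteLine(Increment(1.0).ToString("R"));  // one ULP above 1.0
        Console.WriteLine(Decrement(1.0).ToString("R"));  // one ULP below 1.0
        Console.WriteLine(Increment(double.MaxValue));    // positive infinity
    }
}
```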
Machine epsilon and almost-equal helpers:
* Measurement of the actual machine epsilon: the smallest number that can be subtracted from 1 (negative variant, after Prof. Demmel) or added to 1 (positive variant, after Prof. Higham) yielding a result different from 1.
* Almost-equal comparisons of two doubles, floats, Complex or Complex32 values within a specified maximum absolute error or within a specified maximum relative error, including overloads that work on the norms of the two values and the norm of their difference (all of which may be negative).
* Parameterless almost-equal checks for real and Complex numbers that return true if the two values differ by no more than 10 * 2^(-52).
* Almost-equal comparisons to within a specified number of decimal places, either as an absolute measure (the values are equal if their difference is smaller than 0.5 * 10^(-decimalPlaces), giving half the range on each side of the number, e.g. with 2 decimal places 0.01 equals values between 0.005 and 0.015, but not 0.02 and not 0.00) or as a mixed measure where an absolute difference is used for values very close to zero and a relative difference otherwise.
* Almost-equal comparisons within a tolerance based on the binary representation (ULPs).
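The mixed absolute/relative rule above can be illustrated with a small hedged sketch (made-up helper name, not the library's code): values whose magnitudes are below the tolerance are compared absolutely, everything else relative to the larger magnitude.

```csharp
using System;

static class AlmostEqualDemo
{
    // Relative comparison with an absolute fallback near zero.
    static bool AlmostEqualRelative(double a, double b, double maximumError)
    {
        if (a.Equals(b)) return true;                            // also covers equal infinities
        if (double.IsNaN(a) || double.IsNaN(b)) return false;
        if (double.IsInfinity(a) || double.IsInfinity(b)) return false;

        double diff = Math.Abs(a - b);
        double largest = Math.Max(Math.Abs(a), Math.Abs(b));
        if (largest < maximumError) return diff < maximumError;  // near zero: absolute comparison
        return diff < maximumError * largest;                    // otherwise: relative comparison
    }

    static void Main()
    {
        Console.WriteLine(AlmostEqualRelative(1.0, 1.0 + 1e-12, 1e-9));  // True
        Console.WriteLine(AlmostEqualRelative(1e6, 1e6 + 1.0, 1e-9));    // False
        Console.WriteLine(AlmostEqualRelative(0.0, 1e-12, 1e-9));        // True (absolute fallback)
    }
}
```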
* Almost-equal comparison of two doubles or floats based on the binary representation: the 'number' of floating point values between the two numbers (i.e. the number of discrete steps between them) is determined and checked against the tolerance, so a tolerance of 1 is true only if the two numbers have the same binary representation or are two adjacent numbers that differ by one step. The tolerance must be 1 or larger. The comparison method is explained at http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm ; the article at http://www.extremeoptimization.com/resources/Articles/FPDotNetConceptsAndFormats.aspx explains how to transform the C code to .NET enabled code without using pointers and unsafe code.
* The same almost-equal comparisons applied to two lists of doubles, to two vectors and to two matrices, each available with a maximum-error tolerance and with a number-of-decimal-places tolerance (as an absolute measure, or comparing relatively with an absolute fallback near zero).
* A support interface for precision operations (like AlmostEquals), parameterized by the implementing type: a norm of a value of this type, appropriate for measuring how close the value is to zero, and a norm of the difference of two values, appropriate for measuring how close together they are.

Native provider infrastructure:
* Members common to the native providers: a revision number, P/Invoke methods to the native math libraries, the name of the native DLL, and a method that frees memory buffers, caches and handles allocated in or to the provider without unloading the provider itself; it is safe to call even if the provider is not loaded.
* MKL-specific memory management: freeing the memory allocated to the MKL memory pool (globally or on the current thread), disabling the memory pool (which may impact performance), retrieving memory pool statistics (the number of memory buffers allocated and the bytes allocated to them), and enabling, disabling and measuring peak memory usage, optionally resetting the counter.
* An MKL consistency setting expressing the consistency vs. performance trade-off between runs on different machines: consistent on the same CPU only (maximum performance), consistent on Intel and compatible CPUs with SSE2 support (maximum compatibility), or consistent on Intel CPUs supporting SSE2, SSE4.2, AVX or AVX2 and later.
* A helper class to load native libraries depending on the architecture of the OS and process: it keeps a dictionary of handles to previously loaded libraries, exposes a string indicating the architecture and bitness of the current process and the exception of the last failed load (or null if the load succeeded), and can load a library by file name with an optional hint path, by name and directory (choosing an implementation suitable for the current CPU architecture and process mode from a matching subfolder), or by full path, returning true if the library was loaded or had already been loaded.
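A hedged, Windows-only sketch of that probing idea, using the Win32 LoadLibrary API through P/Invoke; the folder layout, names and fallback order are assumptions for illustration, not the loader's actual logic:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.InteropServices;

static class NativeLoaderSketch
{
    [DllImport("kernel32", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern IntPtr LoadLibrary(string fileName);

    static readonly Dictionary<string, IntPtr> Loaded = new Dictionary<string, IntPtr>();

    // Try the hint path first, then an architecture-specific subfolder ("x64" or "x86").
    static bool TryLoad(string fileName, string hintPath)
    {
        if (Loaded.ContainsKey(fileName)) return true;    // already loaded earlier

        string arch = Environment.Is64BitProcess ? "x64" : "x86";
        string baseDir = hintPath ?? AppDomain.CurrentDomain.BaseDirectory;

        foreach (var dir in new[] { baseDir, Path.Combine(baseDir, arch) })
        {
            string candidate = Path.Combine(dir, fileName);
            if (!File.Exists(candidate)) continue;

            IntPtr handle = LoadLibrary(candidate);
            if (handle != IntPtr.Zero)
            {
                Loaded[fileName] = handle;
                return true;
            }
        }
        return false;
    }

    static void Main()
    {
        // "example-native.dll" is a placeholder name; the result depends on the local folder layout.
        Console.WriteLine(TryLoad("example-native.dll", null));
    }
}
```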
Fourier transform provider and managed FFT implementations:
* A settable Fourier transform provider, with helpers to try a native provider if available, to use the best provider available, or to use a specific provider configured e.g. through the "MathNetNumericsFFTProvider" environment variable with a fallback to the best provider. An optional path to the native provider binaries can be set; if it is not, the `MathNetNumericsFFTProviderPath` environment variable or the default probing paths are used.
* Each provider can report whether it is available at least in principle (verification may still fail if it is available, but it will certainly fail if it is unavailable), can initialize and verify itself with a fallback to alternatives such as the managed provider, and can free its memory buffers, caches and handles without unloading itself.
* Bluestein FFT for arbitrarily sized sample vectors: generation of the Bluestein sequence exp(I*Pi*k^2/N) for a problem size of N samples, convolution with that sequence (including parallel versions), and swapping of the real and imaginary parts of each sample. Sequences with length greater than Math.Sqrt(Int32.MaxValue) + 1 would cause k*k in the Bluestein sequence to overflow (GH-286).
* Radix-2 FFT for power-of-two sized sample vectors: a reorder helper, a butterfly step helper parameterized by the sample vector, the sign of the Fourier series exponent, the level group size and the index inside the level, plus both sequential and parallel versions of the transform.
* Rescaling helpers that fully rescale the FFT result or rescale it by half (e.g. for symmetric transforms).
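The overflow note above can be sidestepped by reducing k*k modulo 2N in 64-bit arithmetic before forming the angle, since exp(I*Pi*m/N) is periodic in m with period 2N. A hedged sketch of the sequence generation (not necessarily how the library implements it):

```csharp
using System;
using System.Numerics;

static class BluesteinSequence
{
    // Bluestein sequence exp(I*Pi*k^2/N) for a problem size of n samples.
    // k*k is reduced modulo 2n in 64-bit arithmetic so large n does not overflow.
    static Complex[] Generate(int n)
    {
        var sequence = new Complex[n];
        for (int k = 0; k < n; k++)
        {
            long t = ((long)k * k) % (2L * n);   // exp(I*Pi*m/n) has period 2n in m
            double angle = Math.PI * t / n;
            sequence[k] = new Complex(Math.Cos(angle), Math.Sin(angle));
        }
        return sequence;
    }

    static void Main()
    {
        var s = Generate(46341);                 // > sqrt(int.MaxValue); int k*k would overflow here
        Console.WriteLine(s[46340]);             // still a well-defined unit-magnitude value
    }
}
```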
NVidia's CUDA Toolkit linear algebra provider and the other native linear algebra providers (each with a hint path for its native binaries, an availability check, initialization with verification, and a resource-freeing method as described above) implement the same BLAS- and LAPACK-style operations for the Complex, Complex32, double and single element types:
* The dot product of two vectors (equivalent to the DOT BLAS routine).
* Adding a scaled vector to another, result = y + alpha*x (similar to the AXPY BLAS routine).
* Scaling an array, usable for both vectors and matrices (similar to the SCAL BLAS routine).
* Multiplying two matrices, result = x * y: a simplified version of the BLAS GEMM routine with alpha set to one, beta set to zero, and x and y not transposed.
* Multiplying two matrices and updating a third with the result, c = alpha*op(a)*op(b) + beta*c, where op() is controlled by a transpose option for each of a and b.
* LUP factorization of A with P*A = L*U: the matrix is overwritten with the factorization (the lower triangular factor L is stored under the diagonal, with an implicit unit diagonal, and the upper triangular factor U on and above the diagonal) and the pivot indices are returned; equivalent to the GETRF LAPACK routine.
* Computing the inverse of a matrix via LU factorization, either from the original matrix (GETRF and GETRI) or from a previously factored matrix and its pivot indices (GETRI).
* Solving A*X=B via LU factorization, either factoring first (GETRF and GETRS) or reusing a previous factorization and its pivot indices (GETRS); on entry B holds the right-hand sides, on exit the solution X.
* Cholesky factorization of a square, positive definite matrix, overwriting the matrix with the factor (POTRF), and solving A*X=B either with factorization (POTRF and POTRS) or from a previously factored matrix (POTRS).
* Solving A*X=B using the singular value decomposition of A, and computing the singular value decomposition itself (GESVD), optionally producing the left singular vectors U and the transposed right singular vectors VT alongside the singular values; A may be overwritten on exit.

Supporting types:
* A transpose option: don't transpose, transpose, or conjugate transpose (applied to a real matrix, the conjugate transpose is just the transpose).
* A matrix norm selector: the 1-norm, the Frobenius norm, the infinity norm, and the largest absolute value norm.
* The linear algebra provider interface, which works off 1-D arrays and supports the Double, Single, Complex and Complex32 data types; like the native providers it exposes an availability check, initialization with a fallback to the managed provider, and a method that frees buffers, caches and handles without unloading the provider.
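To make the 1-D array convention concrete, here is a hedged sketch of the simplified multiply described above (result = x * y, i.e. GEMM with alpha = 1, beta = 0 and no transposition), assuming column-major storage as is conventional for BLAS-style interfaces; it is illustrative only, not the provider's code:

```csharp
using System;

static class GemmSketch
{
    // result = x * y for column-major 1-D arrays:
    // element (i, j) of an m-by-n matrix is stored at index i + j*m.
    static double[] Multiply(double[] x, int xRows, int xCols,
                             double[] y, int yRows, int yCols)
    {
        if (xCols != yRows) throw new ArgumentException("Inner dimensions must match.");
        var result = new double[xRows * yCols];
        for (int j = 0; j < yCols; j++)
            for (int k = 0; k < yRows; k++)
            {
                double ykj = y[k + j * yRows];
                for (int i = 0; i < xRows; i++)
                    result[i + j * xRows] += x[i + k * xRows] * ykj;
            }
        return result;
    }

    static void Main()
    {
        // x = [1 3; 2 4] in column-major order, y = 2x2 identity
        var x = new double[] { 1, 2, 3, 4 };
        var y = new double[] { 1, 0, 0, 1 };
        Console.WriteLine(string.Join(", ", Multiply(x, 2, 2, y, 2, 2)));  // 1, 2, 3, 4
    }
}
```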
- - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiply elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. 
- The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the full QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by QR factor. This is only used for the managed provider and can be - null for the native provider. The native provider uses the Q portion stored in the R matrix. - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - On entry the B matrix; on exit the X matrix. 
- The number of columns of B. - On exit, the solution matrix. - Rows must be greater or equal to columns. - The type of QR factorization to perform. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Gets or sets the linear algebra provider. - Consider to use UseNativeMKL or UseManaged instead. - - The linear algebra provider. - - - - Optional path to try to load native provider binaries from. - If not set, Numerics will fall back to the environment variable - `MathNetNumericsLAProviderPath` or the default probing paths. - - - - - Try to use a native provider, if available. - - - - - Use the best provider available. - - - - - Use a specific provider if configured, e.g. using the - "MathNetNumericsLAProvider" environment variable, - or fall back to the best provider. - - - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. 
- - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Cache-Oblivious Matrix Multiplication - - if set to true transpose matrix A. - if set to true transpose matrix B. - The value to scale the matrix A with. - The matrix A. - Row-shift of the left matrix - Column-shift of the left matrix - The matrix B. - Row-shift of the right matrix - Column-shift of the right matrix - The matrix C. - Row-shift of the result matrix - Column-shift of the result matrix - The number of rows of matrix op(A) and of the matrix C. - The number of columns of matrix op(B) and of the matrix C. - The number of columns of matrix op(A) and the rows of the matrix op(B). - The constant number of rows of matrix op(A) and of the matrix C. - The constant number of columns of matrix op(B) and of the matrix C. - The constant number of columns of matrix op(A) and the rows of the matrix op(B). - Indicates if this is the first recursion. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). 
The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - The B matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. 
- - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Assumes that and have already been transposed. - - - - - Assumes that and have already been transposed. - - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. 
- This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - The requested of the matrix. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Cache-Oblivious Matrix Multiplication - - if set to true transpose matrix A. - if set to true transpose matrix B. - The value to scale the matrix A with. - The matrix A. - Row-shift of the left matrix - Column-shift of the left matrix - The matrix B. - Row-shift of the right matrix - Column-shift of the right matrix - The matrix C. - Row-shift of the result matrix - Column-shift of the result matrix - The number of rows of matrix op(A) and of the matrix C. - The number of columns of matrix op(B) and of the matrix C. - The number of columns of matrix op(A) and the rows of the matrix op(B). - The constant number of rows of matrix op(A) and of the matrix C. - The constant number of columns of matrix op(B) and of the matrix C. 
- The constant number of columns of matrix op(A) and the rows of the matrix op(B). - Indicates if this is the first recursion. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. 
- The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Assumes that and have already been transposed. - - - - - Assumes that and have already been transposed. - - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. 
If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. 
- The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Cache-Oblivious Matrix Multiplication - - if set to true transpose matrix A. - if set to true transpose matrix B. - The value to scale the matrix A with. - The matrix A. - Row-shift of the left matrix - Column-shift of the left matrix - The matrix B. - Row-shift of the right matrix - Column-shift of the right matrix - The matrix C. - Row-shift of the result matrix - Column-shift of the result matrix - The number of rows of matrix op(A) and of the matrix C. - The number of columns of matrix op(B) and of the matrix C. - The number of columns of matrix op(A) and the rows of the matrix op(B). - The constant number of rows of matrix op(A) and of the matrix C. - The constant number of columns of matrix op(B) and of the matrix C. - The constant number of columns of matrix op(A) and the rows of the matrix op(B). - Indicates if this is the first recursion. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. 
- - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. 
- The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. 
- - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Cache-Oblivious Matrix Multiplication - - if set to true transpose matrix A. - if set to true transpose matrix B. - The value to scale the matrix A with. - The matrix A. - Row-shift of the left matrix - Column-shift of the left matrix - The matrix B. - Row-shift of the right matrix - Column-shift of the right matrix - The matrix C. - Row-shift of the result matrix - Column-shift of the result matrix - The number of rows of matrix op(A) and of the matrix C. - The number of columns of matrix op(B) and of the matrix C. - The number of columns of matrix op(A) and the rows of the matrix op(B). - The constant number of rows of matrix op(A) and of the matrix C. - The constant number of columns of matrix op(B) and of the matrix C. - The constant number of columns of matrix op(A) and the rows of the matrix op(B). - Indicates if this is the first recursion. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. 
- - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. 
- If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. 
- The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. 
On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - The B matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. 
- The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. - - Source matrix to reduce - Output: Arrays for internal storage of real parts of eigenvalues - Output: Arrays for internal storage of imaginary parts of eigenvalues - Output: Arrays that contains further information about the transformations. - Order of initial matrix - This is derived from the Algol procedures HTRIDI by - Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Determines eigenvectors by undoing the symmetric tridiagonalize transformation - - Data array of matrix V (eigenvectors) - Previously tridiagonalized matrix by SymmetricTridiagonalize. - Contains further information about the transformations - Input matrix order - This is derived from the Algol procedures HTRIBK, by - by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Nonsymmetric reduction to Hessenberg form. - - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - Data array of the eigenvectors - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. 
- Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Assumes that and have already been transposed. - - - - - Assumes that and have already been transposed. - - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - The requested of the matrix. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. 
- The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. 
On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. - - Source matrix to reduce - Output: Arrays for internal storage of real parts of eigenvalues - Output: Arrays for internal storage of imaginary parts of eigenvalues - Output: Arrays that contains further information about the transformations. 
- Order of initial matrix - This is derived from the Algol procedures HTRIDI by - Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Determines eigenvectors by undoing the symmetric tridiagonalize transformation - - Data array of matrix V (eigenvectors) - Previously tridiagonalized matrix by SymmetricTridiagonalize. - Contains further information about the transformations - Input matrix order - This is derived from the Algol procedures HTRIBK, by - by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Nonsymmetric reduction to Hessenberg form. - - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - Data array of the eigenvectors - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Assumes that and have already been transposed. - - - - - Assumes that and have already been transposed. - - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - Assumes that and have already been transposed. - - - - - Assumes that and have already been transposed. - - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. 
- There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. 
- - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . 
- The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Symmetric Householder reduction to tridiagonal form. - - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tred2 by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Nonsymmetric reduction to Hessenberg form. 
- - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Complex scalar division X/Y. - - Real part of X - Imaginary part of X - Real part of Y - Imaginary part of Y - Division result as a number. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. 
result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. 
- - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. 
- - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Symmetric Householder reduction to tridiagonal form. - - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tred2 by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Nonsymmetric reduction to Hessenberg form. - - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Complex scalar division X/Y. - - Real part of X - Imaginary part of X - Real part of Y - Imaginary part of Y - Division result as a number. - - - - Intel's Math Kernel Library (MKL) linear algebra provider. - - - Intel's Math Kernel Library (MKL) linear algebra provider. - - - Intel's Math Kernel Library (MKL) linear algebra provider. - - - Intel's Math Kernel Library (MKL) linear algebra provider. - - - Intel's Math Kernel Library (MKL) linear algebra provider. - - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. 
- - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. 
- - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. 
- - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . 
- This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. 
* Singular value decomposition of an M-by-N matrix A (which may be overwritten): returns the singular values in ascending value and, if requested, the left singular vectors U and the transposed right singular vectors VT (equivalent to the GESVD LAPACK routine).
* The point-wise add, subtract, multiply, divide and power operations and the eigenvalue decomposition are documented again with the same wording as above.
* Provider management members: a hint path telling the provider where to look for its native binaries; MKL-specific settings for bit consistency on repeated identical computations across varying CPU architectures (a trade-off with performance), VML optimal precision and rounding, and VML accuracy mode; a probe that tries to find out whether the provider is available at least in principle (verification may still fail if it is available, but it will certainly fail if it is unavailable); an initialize-and-verify call (if it fails, consider falling back to alternatives like the managed provider); and a call that frees memory buffers, caches and handles allocated in or to the provider without unloading the provider itself, which remains usable afterwards. A provider-selection sketch follows below.
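This try-native-then-fall-back contract is normally driven through the Control class; a minimal sketch, assuming Control.TryUseNativeMKL(), Control.UseManaged() and the Control.LinearAlgebraProvider property are available as in current Math.NET Numerics releases:

```csharp
using MathNet.Numerics;

static class ProviderSelection
{
    public static void Run()
    {
        // Prefer the native MKL provider when its binaries can be found and verified;
        // otherwise fall back to the fully managed provider.
        if (!Control.TryUseNativeMKL())
        {
            Control.UseManaged();
        }

        System.Console.WriteLine(Control.LinearAlgebraProvider);
    }
}
```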
The same member documentation is then repeated almost verbatim, with only the numeric literals changed (1.0/0.0, 1.0f/0.0f, Complex.One/Complex.Zero, Complex32.One/Complex32.Zero), for the double, single, Complex and Complex32 variants of the provider, for the MKL provider (including its error codes, such as "unable to allocate memory") and for the OpenBLAS linear algebra provider in all four numeric types, each block again ending with the native-binary hint path, availability probe, initialize-and-verify and free-resources members.
Error codes returned from the native OpenBLAS provider: unable to allocate memory.

Random number generator documentation (Math.NET Numerics):

* SystemRandomSource: a random number generator based on the System.Random class of the .NET library. Constructors accept an optional seed and a thread-safety flag (defaulting to the library-wide setting). Members fill a byte array with values over the full range including zero and 255, return a double greater than or equal to 0.0 and less than 1.0, return a non-negative 32-bit signed integer, and fill or return arrays and infinite sequences of such doubles; the bulk members support being called in parallel from multiple threads.
* Mcg31m1: multiplicative congruential generator using a modulus of 2^31-1 and a multiplier of 1132489760 (a recurrence sketch follows below). A seed of zero is replaced by one; otherwise the constructors mirror SystemRandomSource (time/GUID-based default seed, explicit seed, thread-safety flag), and the usual NextDouble and bulk-doubles members are provided.
* Mcg59: multiplicative congruential generator using a modulus of 2^59 and a multiplier of 13^13, with the same constructor and member pattern. The infinite double sequences can be produced in parallel, but each sequence must be enumerated from a single thread.
* MersenneTwister: random number generator using the Mersenne Twister 19937 algorithm, with its internal constants, the usual constructors, a thread-safe Default instance, and NextDouble/Next/NextBytes plus the bulk-doubles members.
* Mrg32k3a: a 32-bit combined multiple recursive generator with two components of order 3, based on P. L'Ecuyer, "Combined Multiple Recursive Random Number Generators," Operations Research, 44, 5 (1996), 816-822.
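A minimal sketch of the Mcg31m1 recurrence as stated above, x_{n+1} = (1132489760 * x_n) mod (2^31 - 1), scaled to [0, 1); this only illustrates the documented recurrence and is not the library's implementation:

```csharp
static class Mcg31m1Sketch
{
    const ulong Modulus = 2147483647;     // 2^31 - 1
    const ulong Multiplier = 1132489760;

    static ulong _x = 1;                  // seed; zero would get stuck, hence one

    // x_{n+1} = (a * x_n) mod m, returned as a double in [0, 1).
    public static double NextDouble()
    {
        _x = (Multiplier * _x) % Modulus;
        return (double)_x / Modulus;
    }
}
```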
* Mrg32k3a constructors and members follow the same pattern as the other generators (time/GUID-based or explicit seed, zero seed replaced by one, thread-safety flag, NextDouble and bulk doubles).
* Palf: a Parallel Additive Lagged Fibonacci pseudo-random number generator, based upon the implementation in the Boost Random Number Library. It uses the modulus 2^32 and, by default, the lags 418 and 1279 (some popular pairs are listed in the Wikipedia article on lagged Fibonacci generators). Constructors accept a seed, a thread-safety flag and optionally explicit short/long lag values; the class exposes the short and long lag, keeps an internal buffer of 32-bit unsigned random numbers together with the index of the next element to hand out, and provides the usual NextDouble/Next and bulk-doubles members. A usage sketch of these generator classes follows below.
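A short usage sketch for these generator classes, assuming the MathNet.Numerics.Random namespace with a MersenneTwister(seed, threadSafe) constructor and a bulk NextDoubles member as in current releases:

```csharp
using MathNet.Numerics.Random;

static class RngSketch
{
    public static void Run()
    {
        // Explicit seed, thread-safe instance.
        var mt = new MersenneTwister(42, true);

        double u = mt.NextDouble();         // uniform in [0.0, 1.0)
        double[] batch = mt.NextDoubles(8); // bulk generation, callable in parallel

        System.Console.WriteLine($"{u} {batch.Length}");
    }
}
```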
The extension methods generate - pseudo-random distributed numbers for types other than double and int32. - - - - - Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The random number generator. - The array to fill with random values. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The random number generator. - The size of the array to fill. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an array of uniform random bytes. - - The random number generator. - The size of the array to fill. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Fills an array with uniform random 32-bit signed integers greater than or equal to zero and less than . - - The random number generator. - The array to fill with random values. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Fills an array with uniform random 32-bit signed integers within the specified range. - - The random number generator. - The array to fill with random values. - Lower bound, inclusive. - Upper bound, exclusive. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a nonnegative random number less than . - - The random number generator. - - A 64-bit signed integer greater than or equal to 0, and less than ; that is, - the range of return values includes 0 but not . - - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a random number of the full Int32 range. - - The random number generator. - - A 32-bit signed integer of the full range, including 0, negative numbers, - and . - - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a random number of the full Int64 range. - - The random number generator. - - A 64-bit signed integer of the full range, including 0, negative numbers, - and . - - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a nonnegative decimal floating point random number less than 1.0. - - The random number generator. 
- - A decimal floating point number greater than or equal to 0.0, and less than 1.0; that is, - the range of return values includes 0.0 but not 1.0. - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a random boolean. - - The random number generator. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Provides a time-dependent seed value, matching the default behavior of System.Random. - WARNING: There is no randomness in this seed and quick repeated calls can cause - the same seed value. Do not use for cryptography! - - - - - Provides a seed based on time and unique GUIDs. - WARNING: There is only low randomness in this seed, but at least quick repeated - calls will result in different seed values. Do not use for cryptography! - - - - - Provides a seed based on an internal random number generator (crypto if available), time and unique GUIDs. - WARNING: There is only medium randomness in this seed, but quick repeated - calls will result in different seed values. Do not use for cryptography! - - - - - Base class for random number generators. This class introduces a layer between - and the Math.Net Numerics random number generators to provide thread safety. - When used directly it use the System.Random as random number source. - - - - - Initializes a new instance of the class using - the value of to set whether - the instance is thread safe or not. - - - - - Initializes a new instance of the class. - - if set to true , the class is thread safe. - Thread safe instances are two and half times slower than non-thread - safe classes. - - - - Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The array to fill with random values. - - - - Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The size of the array to fill. - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than . - - - - - Returns a random number less then a specified maximum. - - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. - A 32-bit signed integer less than . - is zero or negative. - - - - Returns a random number within a specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - A 32-bit signed integer greater than or equal to and less than ; that is, the range of return values includes but not . If equals , is returned. - - is greater than . - - - - Fills an array with random 32-bit signed integers greater than or equal to zero and less than . - - The array to fill with random values. - - - - Returns an array with random 32-bit signed integers greater than or equal to zero and less than . - - The size of the array to fill. - - - - Fills an array with random numbers within a specified range. - - The array to fill with random values. - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. - - - - Returns an array with random 32-bit signed integers within the specified range. - - The size of the array to fill. - The exclusive upper bound of the random number returned. 
Range: maxExclusive ≥ 1. - - - - Fills an array with random numbers within a specified range. - - The array to fill with random values. - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Returns an array with random 32-bit signed integers within the specified range. - - The size of the array to fill. - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Returns an infinite sequence of random 32-bit signed integers greater than or equal to zero and less than . - - - - - Returns an infinite sequence of random numbers within a specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Fills the elements of a specified array of bytes with random numbers. - - An array of bytes to contain random numbers. - is null. - - - - Returns a random number between 0.0 and 1.0. - - A double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than 2147483647 (). - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 32 (not verified). - - - - - Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 64 (not verified). - - - - - Returns a random 32-bit signed integer within the specified range. - - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). - - - - Returns a random 32-bit signed integer within the specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). - - - - A random number generator based on the class in the .NET library. - - - - - Construct a new random number generator with a random seed. - - - - - Construct a new random number generator with random seed. - - if set to true , the class is thread safe. - - - - Construct a new random number generator with random seed. - - The seed value. - - - - Construct a new random number generator with random seed. - - The seed value. - if set to true , the class is thread safe. - - - - Default instance, thread-safe. - - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Returns a random 32-bit signed integer within the specified range. - - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). - - - - Returns a random 32-bit signed integer within the specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. 
Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Fill an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. - WARNING: potentially very short random sequence length, can generate repeated partial sequences. - - Parallelized on large length, but also supports being called in parallel from multiple threads - - - - Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. - WARNING: potentially very short random sequence length, can generate repeated partial sequences. - - Parallelized on large length, but also supports being called in parallel from multiple threads - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Wichmann-Hill’s 1982 combined multiplicative congruential generator. - - See: Wichmann, B. A. & Hill, I. D. (1982), "Algorithm AS 183: - An efficient and portable pseudo-random number generator". Applied Statistics 31 (1982) 188-190 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - The seed is set to 1, if the zero is used as the seed. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Wichmann-Hill’s 2006 combined multiplicative congruential generator. - - See: Wichmann, B. A. & Hill, I. D. (2006), "Generating good pseudo-random numbers". - Computational Statistics & Data Analysis 51:3 (2006) 1614-1622 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. 
- - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - The seed is set to 1, if the zero is used as the seed. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Implements a multiply-with-carry Xorshift pseudo random number generator (RNG) specified in Marsaglia, George. (2003). Xorshift RNGs. - Xn = a * Xn−3 + c mod 2^32 - http://www.jstatsoft.org/v08/i14/paper - - - - - The default value for X1. - - - - - The default value for X2. - - - - - The default value for the multiplier. - - - - - The default value for the carry over. - - - - - The multiplier to compute a double-precision floating point number [0, 1) - - - - - Seed or last but three unsigned random number. - - - - - Last but two unsigned random number. - - - - - Last but one unsigned random number. - - - - - The value of the carry over. - - - - - The multiplier. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - Note: must be less than . - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - must be less than . - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - must be less than . - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. 
- - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - must be less than . - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Xoshiro256** pseudo random number generator. - A random number generator based on the class in the .NET library. - - - This is xoshiro256** 1.0, our all-purpose, rock-solid generator. It has - excellent(sub-ns) speed, a state space(256 bits) that is large enough - for any parallel application, and it passes all tests we are aware of. - - For generating just floating-point numbers, xoshiro256+ is even faster. - - The state must be seeded so that it is not everywhere zero.If you have - a 64-bit seed, we suggest to seed a splitmix64 generator and use its - output to fill s. - - For further details see: - David Blackman & Sebastiano Vigna (2018), "Scrambled Linear Pseudorandom Number Generators". - https://arxiv.org/abs/1805.01407 - - - - - Construct a new random number generator with a random seed. - - - - - Construct a new random number generator with random seed. - - if set to true , the class is thread safe. - - - - Construct a new random number generator with random seed. - - The seed value. - - - - Construct a new random number generator with random seed. - - The seed value. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 32 (not verified). - - - - - Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 64 (not verified). - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. 
- - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Splitmix64 RNG. - - RNG state. This can take any value, including zero. - A new random UInt64. - - Splitmix64 produces equidistributed outputs, thus if a zero is generated then the - next zero will be after a further 2^64 outputs. - - - - - Bisection root-finding algorithm. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. - Guess for the high value of the range where the root is supposed to be. Will be expanded if needed. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Factor at which to expand the bounds, if needed. Default 1.6. - Maximum number of expand iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy for both the root and the function value at the root. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. - Maximum number of iterations. Usually 100. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - - Algorithm by Brent, Van Wijngaarden, Dekker et al. - Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. - Guess for the high value of the range where the root is supposed to be. Will be expanded if needed. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Factor at which to expand the bounds, if needed. Default 1.6. - Maximum number of expand iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The low value of the range where the root is supposed to be. 
- The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. - Maximum number of iterations. Usually 100. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - Helper method useful for preventing rounding errors. - a*sign(b) - - - - Algorithm by Broyden. - Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - Initial guess of the root. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Relative step size for calculating the Jacobian matrix at first step. Default 1.0e-4 - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - Initial guess of the root. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. - Maximum number of iterations. Usually 100. - Relative step size for calculating the Jacobian matrix at first step. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - Find a solution of the equation f(x)=0. - The function to find roots from. - Initial guess of the root. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. - Maximum number of iterations. Usually 100. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - - Helper method to calculate an approximation of the Jacobian. - - The function. - The argument (initial guess). - The result (of initial guess). - Relative step size for calculating the Jacobian. - - - - Finds roots to the cubic equation x^3 + a2*x^2 + a1*x + a0 = 0 - Implements the cubic formula in http://mathworld.wolfram.com/CubicFormula.html - - - - - Q and R are transformed variables. - - - - - n^(1/3) - work around a negative double raised to (1/3) - - - - - Find all real-valued roots of the cubic equation a0 + a1*x + a2*x^2 + x^3 = 0. - Note the special coefficient order ascending by exponent (consistent with polynomials). - - - - - Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. - Note the special coefficient order ascending by exponent (consistent with polynomials). - - - - - Pure Newton-Raphson root-finding algorithm without any recovery measures in cases it behaves badly. - The algorithm aborts immediately if the root leaves the bound interval. - - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. - The high value of the range where the root is supposed to be. Aborts if it leaves the interval. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. 
Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - Initial guess of the root. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. - The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - Initial guess of the root. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. - The high value of the range where the root is supposed to be. Aborts if it leaves the interval. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. - Maximum number of iterations. Example: 100. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - - Robust Newton-Raphson root-finding algorithm that falls back to bisection when overshooting or converging too slow, or to subdivision on lacking bracketing. - - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Default 20. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. - Maximum number of iterations. Example: 100. - How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Example: 20. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - - Pure Secant root-finding algorithm without any recovery measures in cases it behaves badly. - The algorithm aborts immediately if the root leaves the bound interval. - - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first guess of the root within the bounds specified. - The second guess of the root within the bounds specified. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. - The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. 
- Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first guess of the root within the bounds specified. - The second guess of the root within the bounds specified. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. - Maximum number of iterations. Example: 100. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false - - - Detect a range containing at least one root. - The function to detect roots from. - Lower value of the range. - Upper value of the range - The growing factor of research. Usually 1.6. - Maximum number of iterations. Usually 50. - True if the bracketing operation succeeded, false otherwise. - This iterative methods stops when two values with opposite signs are found. - - - - Sorting algorithms for single, tuple and triple lists. - - - - - Sort a list of keys, in place using the quick sort algorithm using the quick sort algorithm. - - The type of elements in the key list. - List to sort. - Comparison, defining the sort order. - - - - Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. - - The type of elements in the key list. - The type of elements in the item list. - List to sort. - List to permute the same way as the key list. - Comparison, defining the sort order. - - - - Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. - - The type of elements in the key list. - The type of elements in the first item list. - The type of elements in the second item list. - List to sort. - First list to permute the same way as the key list. - Second list to permute the same way as the key list. - Comparison, defining the sort order. - - - - Sort a range of a list of keys, in place using the quick sort algorithm. - - The type of element in the list. - List to sort. - The zero-based starting index of the range to sort. - The length of the range to sort. - Comparison, defining the sort order. - - - - Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. - - The type of elements in the key list. - The type of elements in the item list. - List to sort. - List to permute the same way as the key list. - The zero-based starting index of the range to sort. - The length of the range to sort. - Comparison, defining the sort order. - - - - Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. - - The type of elements in the key list. - The type of elements in the first item list. - The type of elements in the second item list. - List to sort. - First list to permute the same way as the key list. - Second list to permute the same way as the key list. - The zero-based starting index of the range to sort. - The length of the range to sort. - Comparison, defining the sort order. 
- - - - Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. - - The type of elements in the primary list. - The type of elements in the secondary list. - List to sort. - List to sort on duplicate primary items, and permute the same way as the key list. - Comparison, defining the primary sort order. - Comparison, defining the secondary sort order. - - - - Recursive implementation for an in place quick sort on a list. - - The type of the list on which the quick sort is performed. - The list which is sorted using quick sort. - The method with which to compare two elements of the quick sort. - The left boundary of the quick sort. - The right boundary of the quick sort. - - - - Recursive implementation for an in place quick sort on a list while reordering one other list accordingly. - - The type of the list on which the quick sort is performed. - The type of the list which is automatically reordered accordingly. - The list which is sorted using quick sort. - The list which is automatically reordered accordingly. - The method with which to compare two elements of the quick sort. - The left boundary of the quick sort. - The right boundary of the quick sort. - - - - Recursive implementation for an in place quick sort on one list while reordering two other lists accordingly. - - The type of the list on which the quick sort is performed. - The type of the first list which is automatically reordered accordingly. - The type of the second list which is automatically reordered accordingly. - The list which is sorted using quick sort. - The first list which is automatically reordered accordingly. - The second list which is automatically reordered accordingly. - The method with which to compare two elements of the quick sort. - The left boundary of the quick sort. - The right boundary of the quick sort. - - - - Recursive implementation for an in place quick sort on the primary and then by the secondary list while reordering one secondary list accordingly. - - The type of the primary list. - The type of the secondary list. - The list which is sorted using quick sort. - The list which is sorted secondarily (on primary duplicates) and automatically reordered accordingly. - The method with which to compare two elements of the primary list. - The method with which to compare two elements of the secondary list. - The left boundary of the quick sort. - The right boundary of the quick sort. - - - - Performs an in place swap of two elements in a list. - - The type of elements stored in the list. - The list in which the elements are stored. - The index of the first element of the swap. - The index of the second element of the swap. - - - - This partial implementation of the SpecialFunctions class contains all methods related to the Airy functions. - - - This partial implementation of the SpecialFunctions class contains all methods related to the Bessel functions. - - - This partial implementation of the SpecialFunctions class contains all methods related to the error function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the Hankel function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the harmonic function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the logistic function. 
- - - This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the spherical Bessel functions. - - - - - Returns the Airy function Ai. - AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. - - The value to compute the Airy function of. - The Airy function Ai. - - - - Returns the exponentially scaled Airy function Ai. - ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). - - The value to compute the Airy function of. - The exponentially scaled Airy function Ai. - - - - Returns the Airy function Ai. - AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. - - The value to compute the Airy function of. - The Airy function Ai. - - - - Returns the exponentially scaled Airy function Ai. - ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). - - The value to compute the Airy function of. - The exponentially scaled Airy function Ai. - - - - Returns the derivative of the Airy function Ai. - AiryAiPrime(z) is defined as d/dz AiryAi(z). - - The value to compute the derivative of the Airy function of. - The derivative of the Airy function Ai. - - - - Returns the exponentially scaled derivative of Airy function Ai - ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). - - The value to compute the derivative of the Airy function of. - The exponentially scaled derivative of Airy function Ai. - - - - Returns the derivative of the Airy function Ai. - AiryAiPrime(z) is defined as d/dz AiryAi(z). - - The value to compute the derivative of the Airy function of. - The derivative of the Airy function Ai. - - - - Returns the exponentially scaled derivative of the Airy function Ai. - ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). - - The value to compute the derivative of the Airy function of. - The exponentially scaled derivative of the Airy function Ai. - - - - Returns the Airy function Bi. - AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. - - The value to compute the Airy function of. - The Airy function Bi. - - - - Returns the exponentially scaled Airy function Bi. - ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). - - The value to compute the Airy function of. - The exponentially scaled Airy function Bi(z). - - - - Returns the Airy function Bi. - AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. - - The value to compute the Airy function of. - The Airy function Bi. - - - - Returns the exponentially scaled Airy function Bi. - ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). - - The value to compute the Airy function of. - The exponentially scaled Airy function Bi. - - - - Returns the derivative of the Airy function Bi. - AiryBiPrime(z) is defined as d/dz AiryBi(z). - - The value to compute the derivative of the Airy function of. - The derivative of the Airy function Bi. - - - - Returns the exponentially scaled derivative of the Airy function Bi. - ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). - - The value to compute the derivative of the Airy function of. 
- The exponentially scaled derivative of the Airy function Bi. - - - - Returns the derivative of the Airy function Bi. - AiryBiPrime(z) is defined as d/dz AiryBi(z). - - The value to compute the derivative of the Airy function of. - The derivative of the Airy function Bi. - - - - Returns the exponentially scaled derivative of the Airy function Bi. - ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). - - The value to compute the derivative of the Airy function of. - The exponentially scaled derivative of the Airy function Bi. - - - - Returns the Bessel function of the first kind. - BesselJ(n, z) is a solution to the Bessel differential equation. - - The order of the Bessel function. - The value to compute the Bessel function of. - The Bessel function of the first kind. - - - - Returns the exponentially scaled Bessel function of the first kind. - ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). - - The order of the Bessel function. - The value to compute the Bessel function of. - The exponentially scaled Bessel function of the first kind. - - - - Returns the Bessel function of the first kind. - BesselJ(n, z) is a solution to the Bessel differential equation. - - The order of the Bessel function. - The value to compute the Bessel function of. - The Bessel function of the first kind. - - - - Returns the exponentially scaled Bessel function of the first kind. - ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). - - The order of the Bessel function. - The value to compute the Bessel function of. - The exponentially scaled Bessel function of the first kind. - - - - Returns the Bessel function of the second kind. - BesselY(n, z) is a solution to the Bessel differential equation. - - The order of the Bessel function. - The value to compute the Bessel function of. - The Bessel function of the second kind. - - - - Returns the exponentially scaled Bessel function of the second kind. - ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * Y(n, z). - - The order of the Bessel function. - The value to compute the Bessel function of. - The exponentially scaled Bessel function of the second kind. - - - - Returns the Bessel function of the second kind. - BesselY(n, z) is a solution to the Bessel differential equation. - - The order of the Bessel function. - The value to compute the Bessel function of. - The Bessel function of the second kind. - - - - Returns the exponentially scaled Bessel function of the second kind. - ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselY(n, z). - - The order of the Bessel function. - The value to compute the Bessel function of. - The exponentially scaled Bessel function of the second kind. - - - - Returns the modified Bessel function of the first kind. - BesselI(n, z) is a solution to the modified Bessel differential equation. - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The modified Bessel function of the first kind. - - - - Returns the exponentially scaled modified Bessel function of the first kind. - ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The exponentially scaled modified Bessel function of the first kind. - - - - Returns the modified Bessel function of the first kind. - BesselI(n, z) is a solution to the modified Bessel differential equation. 
- - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The modified Bessel function of the first kind. - - - - Returns the exponentially scaled modified Bessel function of the first kind. - ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The exponentially scaled modified Bessel function of the first kind. - - - - Returns the modified Bessel function of the second kind. - BesselK(n, z) is a solution to the modified Bessel differential equation. - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The modified Bessel function of the second kind. - - - - Returns the exponentially scaled modified Bessel function of the second kind. - ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The exponentially scaled modified Bessel function of the second kind. - - - - Returns the modified Bessel function of the second kind. - BesselK(n, z) is a solution to the modified Bessel differential equation. - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The modified Bessel function of the second kind. - - - - Returns the exponentially scaled modified Bessel function of the second kind. - ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The exponentially scaled modified Bessel function of the second kind. - - - - Computes the logarithm of the Euler Beta function. - - The first Beta parameter, a positive real number. - The second Beta parameter, a positive real number. - The logarithm of the Euler Beta function evaluated at z,w. - If or are not positive. - - - - Computes the Euler Beta function. - - The first Beta parameter, a positive real number. - The second Beta parameter, a positive real number. - The Euler Beta function evaluated at z,w. - If or are not positive. - - - - Returns the lower incomplete (unregularized) beta function - B(a,b,x) = int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. - - The first Beta parameter, a positive real number. - The second Beta parameter, a positive real number. - The upper limit of the integral. - The lower incomplete (unregularized) beta function. - - - - Returns the regularized lower incomplete beta function - I_x(a,b) = 1/Beta(a,b) * int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. - - The first Beta parameter, a positive real number. - The second Beta parameter, a positive real number. - The upper limit of the integral. - The regularized lower incomplete beta function. - - - - ************************************** - COEFFICIENTS FOR METHOD ErfImp * - ************************************** - - Polynomial coefficients for a numerator of ErfImp - calculation for Erf(x) in the interval [1e-10, 0.5]. - - - - Polynomial coefficients for a denominator of ErfImp - calculation for Erf(x) in the interval [1e-10, 0.5]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [0.5, 0.75]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [0.5, 0.75]. 
- - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [0.75, 1.25]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [0.75, 1.25]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [1.25, 2.25]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [1.25, 2.25]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [2.25, 3.5]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [2.25, 3.5]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [3.5, 5.25]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [3.5, 5.25]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [5.25, 8]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [5.25, 8]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [8, 11.5]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [8, 11.5]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [11.5, 17]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [11.5, 17]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [17, 24]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [17, 24]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [24, 38]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [24, 38]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [38, 60]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [38, 60]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [60, 85]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [60, 85]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [85, 110]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [85, 110]. - - - - - ************************************** - COEFFICIENTS FOR METHOD ErfInvImp * - ************************************** - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0, 0.5]. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0, 0.5]. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.5, 0.75]. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.5, 0.75]. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. 
- - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. - - - - Calculates the error function. - The value to evaluate. - the error function evaluated at given value. - - - returns 1 if x == double.PositiveInfinity. - returns -1 if x == double.NegativeInfinity. - - - - - Calculates the complementary error function. - The value to evaluate. - the complementary error function evaluated at given value. - - - returns 0 if x == double.PositiveInfinity. - returns 2 if x == double.NegativeInfinity. - - - - - Calculates the inverse error function evaluated at z. - The inverse error function evaluated at given value. - - - returns double.PositiveInfinity if z >= 1.0. - returns double.NegativeInfinity if z <= -1.0. - - - Calculates the inverse error function evaluated at z. - value to evaluate. - the inverse error function evaluated at Z. - - - - Implementation of the error function. - - Where to evaluate the error function. - Whether to compute 1 - the error function. - the error function. - - - Calculates the complementary inverse error function evaluated at z. - The complementary inverse error function evaluated at given value. - We have tested this implementation against the arbitrary precision mpmath library - and found cases where we can only guarantee 9 significant figures correct. - - returns double.PositiveInfinity if z <= 0.0. - returns double.NegativeInfinity if z >= 2.0. - - - calculates the complementary inverse error function evaluated at z. - value to evaluate. - the complementary inverse error function evaluated at Z. - - - - The implementation of the inverse error function. - - First intermediate parameter. - Second intermediate parameter. - Third intermediate parameter. - the inverse error function. - - - - Computes the generalized Exponential Integral function (En). - - The argument of the Exponential Integral function. - Integer power of the denominator term. Generalization index. - The value of the Exponential Integral function. - - This implementation of the computation of the Exponential Integral function follows the derivation in - "Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55", Abramowitz, M., and Stegun, I.A. 1964, reprinted 1968 by - Dover Publications, New York), Chapters 6, 7, and 26. 
- AND - "Advanced mathematical methods for scientists and engineers", Bender, Carl M.; Steven A. Orszag (1978). page 253 - - - for x > 1 uses continued fraction approach that is often used to compute incomplete gamma. - for 0 < x <= 1 uses Taylor series expansion - - Our unit tests suggest that the accuracy of the Exponential Integral function is correct up to 13 floating point digits. - - - - - Computes the factorial function x -> x! of an integer number > 0. The function can represent all number up - to 22! exactly, all numbers up to 170! using a double representation. All larger values will overflow. - - A value value! for value > 0 - - If you need to multiply or divide various such factorials, consider using the logarithmic version - instead so you can add instead of multiply and subtract instead of divide, and - then exponentiate the result using . This will also circumvent the problem that - factorials become very large even for small parameters. - - - - - - Computes the factorial of an integer. - - - - - Computes the logarithmic factorial function x -> ln(x!) of an integer number > 0. - - A value value! for value > 0 - - - - Computes the binomial coefficient: n choose k. - - A nonnegative value n. - A nonnegative value h. - The binomial coefficient: n choose k. - - - - Computes the natural logarithm of the binomial coefficient: ln(n choose k). - - A nonnegative value n. - A nonnegative value h. - The logarithmic binomial coefficient: ln(n choose k). - - - - Computes the multinomial coefficient: n choose n1, n2, n3, ... - - A nonnegative value n. - An array of nonnegative values that sum to . - The multinomial coefficient. - if is . - If or any of the are negative. - If the sum of all is not equal to . - - - - The order of the approximation. - - - - - Auxiliary variable when evaluating the function. - - - - - Polynomial coefficients for the approximation. - - - - - Computes the logarithm of the Gamma function. - - The argument of the gamma function. - The logarithm of the gamma function. - - This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in - "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. - We use the implementation listed on p. 116 which achieves an accuracy of 16 floating point digits. Although 16 digit accuracy - should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). - Our unit tests suggest that the accuracy of the Gamma function is correct up to 14 floating point digits. - - - - - Computes the Gamma function. - - The argument of the gamma function. - The logarithm of the gamma function. - - - This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in - "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. - We use the implementation listed on p. 116 which should achieve an accuracy of 16 floating point digits. Although 16 digit accuracy - should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). - - Our unit tests suggest that the accuracy of the Gamma function is correct up to 13 floating point digits. - - - - - Returns the upper incomplete regularized gamma function - Q(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. - - The argument for the gamma function. - The lower integral limit. - The upper incomplete regularized gamma function. 
- - - - Returns the upper incomplete gamma function - Gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. - - The argument for the gamma function. - The lower integral limit. - The upper incomplete gamma function. - - - - Returns the lower incomplete gamma function - gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. - - The argument for the gamma function. - The upper integral limit. - The lower incomplete gamma function. - - - - Returns the lower incomplete regularized gamma function - P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. - - The argument for the gamma function. - The upper integral limit. - The lower incomplete gamma function. - - - - Returns the inverse P^(-1) of the regularized lower incomplete gamma function - P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0, - such that P^(-1)(a,P(a,x)) == x. - - - - - Computes the Digamma function which is mathematically defined as the derivative of the logarithm of the gamma function. - This implementation is based on - Jose Bernardo - Algorithm AS 103: - Psi ( Digamma ) Function, - Applied Statistics, - Volume 25, Number 3, 1976, pages 315-317. - Using the modifications as in Tom Minka's lightspeed toolbox. - - The argument of the digamma function. - The value of the DiGamma function at . - - - - Computes the inverse Digamma function: this is the inverse of the logarithm of the gamma function. This function will - only return solutions that are positive. - This implementation is based on the bisection method. - - The argument of the inverse digamma function. - The positive solution to the inverse DiGamma function at . - - - - Computes the Rising Factorial (Pochhammer function) x -> (x)n, n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials - - The real value of the Rising Factorial for x and n - - - - Computes the Falling Factorial (Pochhammer function) x -> x(n), n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials - - The real value of the Falling Factorial for x and n - - - - A generalized hypergeometric series is a power series in which the ratio of successive coefficients indexed by n is a rational function of n. - This is the most common pFq(a1, ..., ap; b1,...,bq; z) representation - see: https://en.wikipedia.org/wiki/Generalized_hypergeometric_function - - The list of coefficients in the numerator - The list of coefficients in the denominator - The variable in the power series - The value of the Generalized HyperGeometric Function. - - - - Returns the Hankel function of the first kind. - HankelH1(n, z) is defined as BesselJ(n, z) + j * BesselY(n, z). - - The order of the Hankel function. - The value to compute the Hankel function of. - The Hankel function of the first kind. - - - - Returns the exponentially scaled Hankel function of the first kind. - ScaledHankelH1(n, z) is given by Exp(-z * j) * HankelH1(n, z) where j = Sqrt(-1). - - The order of the Hankel function. - The value to compute the Hankel function of. - The exponentially scaled Hankel function of the first kind. - - - - Returns the Hankel function of the second kind. - HankelH2(n, z) is defined as BesselJ(n, z) - j * BesselY(n, z). - - The order of the Hankel function. - The value to compute the Hankel function of. - The Hankel function of the second kind. - - - - Returns the exponentially scaled Hankel function of the second kind. - ScaledHankelH2(n, z) is given by Exp(z * j) * HankelH2(n, z) where j = Sqrt(-1). - - The order of the Hankel function. 
- The value to compute the Hankel function of. - The exponentially scaled Hankel function of the second kind. - - - - Computes the 'th Harmonic number. - - The Harmonic number which needs to be computed. - The t'th Harmonic number. - - - - Compute the generalized harmonic number of order n of m. (1 + 1/2^m + 1/3^m + ... + 1/n^m) - - The order parameter. - The power parameter. - General Harmonic number. - - - - Returns the Kelvin function of the first kind. - KelvinBe(nu, x) is given by BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). - KelvinBer(nu, x) and KelvinBei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) - - the order of the the Kelvin function. - The value to compute the Kelvin function of. - The Kelvin function of the first kind. - - - - Returns the Kelvin function ber. - KelvinBer(nu, x) is given by the real part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). - - the order of the the Kelvin function. - The value to compute the Kelvin function of. - The Kelvin function ber. - - - - Returns the Kelvin function ber. - KelvinBer(x) is given by the real part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). - KelvinBer(x) is equivalent to KelvinBer(0, x). - - The value to compute the Kelvin function of. - The Kelvin function ber. - - - - Returns the Kelvin function bei. - KelvinBei(nu, x) is given by the imaginary part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). - - the order of the the Kelvin function. - The value to compute the Kelvin function of. - The Kelvin function bei. - - - - Returns the Kelvin function bei. - KelvinBei(x) is given by the imaginary part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). - KelvinBei(x) is equivalent to KelvinBei(0, x). - - The value to compute the Kelvin function of. - The Kelvin function bei. - - - - Returns the derivative of the Kelvin function ber. - - The order of the Kelvin function. - The value to compute the derivative of the Kelvin function of. - the derivative of the Kelvin function ber - - - - Returns the derivative of the Kelvin function ber. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function ber. - - - - Returns the derivative of the Kelvin function bei. - - The order of the Kelvin function. - The value to compute the derivative of the Kelvin function of. - the derivative of the Kelvin function bei. - - - - Returns the derivative of the Kelvin function bei. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function bei. - - - - Returns the Kelvin function of the second kind - KelvinKe(nu, x) is given by Exp(-nu * pi * j / 2) * BesselK(nu, x * sqrt(j)) where j = sqrt(-1). - KelvinKer(nu, x) and KelvinKei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) - - The order of the Kelvin function. - The value to calculate the kelvin function of, - - - - - Returns the Kelvin function ker. - KelvinKer(nu, x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). - - the order of the the Kelvin function. - The non-negative real value to compute the Kelvin function of. - The Kelvin function ker. - - - - Returns the Kelvin function ker. - KelvinKer(x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). - KelvinKer(x) is equivalent to KelvinKer(0, x). - - The non-negative real value to compute the Kelvin function of. - The Kelvin function ker. - - - - Returns the Kelvin function kei. 
- KelvinKei(nu, x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). - - the order of the the Kelvin function. - The non-negative real value to compute the Kelvin function of. - The Kelvin function kei. - - - - Returns the Kelvin function kei. - KelvinKei(x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). - KelvinKei(x) is equivalent to KelvinKei(0, x). - - The non-negative real value to compute the Kelvin function of. - The Kelvin function kei. - - - - Returns the derivative of the Kelvin function ker. - - The order of the Kelvin function. - The non-negative real value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function ker. - - - - Returns the derivative of the Kelvin function ker. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function ker. - - - - Returns the derivative of the Kelvin function kei. - - The order of the Kelvin function. - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function kei. - - - - Returns the derivative of the Kelvin function kei. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function kei. - - - - Computes the logistic function. see: http://en.wikipedia.org/wiki/Logistic - - The parameter for which to compute the logistic function. - The logistic function of . - - - - Computes the logit function, the inverse of the sigmoid logistic function. see: http://en.wikipedia.org/wiki/Logit - - The parameter for which to compute the logit function. This number should be - between 0 and 1. - The logarithm of divided by 1.0 - . - - - - ************************************** - COEFFICIENTS FOR METHODS bessi0 * - ************************************** - - Chebyshev coefficients for exp(-x) I0(x) - in the interval [0, 8]. - - lim(x->0){ exp(-x) I0(x) } = 1. - - - - Chebyshev coefficients for exp(-x) sqrt(x) I0(x) - in the inverted interval [8, infinity]. - - lim(x->inf){ exp(-x) sqrt(x) I0(x) } = 1/sqrt(2pi). - - - - - ************************************** - COEFFICIENTS FOR METHODS bessi1 * - ************************************** - - Chebyshev coefficients for exp(-x) I1(x) / x - in the interval [0, 8]. - - lim(x->0){ exp(-x) I1(x) / x } = 1/2. - - - - Chebyshev coefficients for exp(-x) sqrt(x) I1(x) - in the inverted interval [8, infinity]. - - lim(x->inf){ exp(-x) sqrt(x) I1(x) } = 1/sqrt(2pi). - - - - - ************************************** - COEFFICIENTS FOR METHODS bessk0, bessk0e * - ************************************** - - Chebyshev coefficients for K0(x) + log(x/2) I0(x) - in the interval [0, 2]. The odd order coefficients are all - zero; only the even order coefficients are listed. - - lim(x->0){ K0(x) + log(x/2) I0(x) } = -EUL. - - - - Chebyshev coefficients for exp(x) sqrt(x) K0(x) - in the inverted interval [2, infinity]. - - lim(x->inf){ exp(x) sqrt(x) K0(x) } = sqrt(pi/2). - - - - - ************************************** - COEFFICIENTS FOR METHODS bessk1, bessk1e * - ************************************** - - Chebyshev coefficients for x(K1(x) - log(x/2) I1(x)) - in the interval [0, 2]. - - lim(x->0){ x(K1(x) - log(x/2) I1(x)) } = 1. - - - - Chebyshev coefficients for exp(x) sqrt(x) K1(x) - in the interval [2, infinity]. - - lim(x->inf){ exp(x) sqrt(x) K1(x) } = sqrt(pi/2). - - - - Returns the modified Bessel function of first kind, order 0 of the argument. -

- The function is defined as i0(x) = j0( ix ). -

- The range is partitioned into the two intervals [0, 8] and - (8, infinity). Chebyshev polynomial expansions are employed - in each interval. -

- The value to compute the Bessel function of. - -
- - Returns the modified Bessel function of first kind, - order 1 of the argument. -

- The function is defined as i1(x) = -i j1( ix ). -

- The range is partitioned into the two intervals [0, 8] and - (8, infinity). Chebyshev polynomial expansions are employed - in each interval. -

- The value to compute the Bessel function of. - -
- - Returns the modified Bessel function of the second kind - of order 0 of the argument. -

- The range is partitioned into the two intervals [0, 8] and - (8, infinity). Chebyshev polynomial expansions are employed - in each interval. -

- The value to compute the Bessel function of. - -
- - Returns the exponentially scaled modified Bessel function - of the second kind of order 0 of the argument. - - The value to compute the Bessel function of. - - - - Returns the modified Bessel function of the second kind - of order 1 of the argument. -

- The range is partitioned into the two intervals [0, 2] and - (2, infinity). Chebyshev polynomial expansions are employed - in each interval. -

- The value to compute the Bessel function of. - -
- - Returns the exponentially scaled modified Bessel function - of the second kind of order 1 of the argument. -

- k1e(x) = exp(x) * k1(x). -

- The value to compute the Bessel function of. - -
- - - Returns the modified Struve function of order 0. - - The value to compute the function of. - - - - Returns the modified Struve function of order 1. - - The value to compute the function of. - - - - Returns the difference between the Bessel I0 and Struve L0 functions. - - The value to compute the function of. - - - - Returns the difference between the Bessel I1 and Struve L1 functions. - - The value to compute the function of. - - - - Returns the spherical Bessel function of the first kind. - SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the first kind. - - - - Returns the spherical Bessel function of the first kind. - SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the first kind. - - - - Returns the spherical Bessel function of the second kind. - SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the second kind. - - - - Returns the spherical Bessel function of the second kind. - SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the second kind. - - - - Numerically stable exponential minus one, i.e. x -> exp(x)-1 - - A number specifying a power. - Returns exp(power)-1. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Evaluation functions, useful for function approximation. - - - - - Evaluate a polynomial at point x. - Coefficients are ordered by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - Coefficients are ordered by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - Coefficients are ordered by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. 
- The coefficients of the polynomial, coefficient for power k at index k. - - - - Numerically stable series summation - - provides the summands sequentially - Sum - - - Evaluates the series of Chebyshev polynomials Ti at argument x/2. - The series is given by -
-                  N-1
-                   - '
-            y  =   >   coef[i] T (x/2)
-                   -            i
-                  i=0
-            
- Coefficients are stored in reverse order, i.e. the zero - order term is last in the array. Note N is the number of - coefficients, not the order. -

- If coefficients are for the interval a to b, x must - have been transformed to x -> 2(2x - b - a)/(b-a) before - entering the routine. This maps x from (a, b) to (-1, 1), - over which the Chebyshev polynomials are defined. -

- If the coefficients are for the inverted interval, in - which (a, b) is mapped to (1/b, 1/a), the transformation - required is x -> 2(2ab/x - b - a)/(b-a). If b is infinity, - this becomes x -> 4a/x - 1. -

- SPEED: -

- Taking advantage of the recurrence properties of the - Chebyshev polynomials, the routine requires one more - addition per loop than evaluating a nested polynomial of - the same degree. -

- The coefficients of the polynomial. - Argument to the polynomial. - - Reference: https://bpm2.svn.codeplex.com/svn/Common.Numeric/Arithmetic.cs -

- Marked as Deprecated in - http://people.apache.org/~isabel/mahout_site/mahout-matrix/apidocs/org/apache/mahout/jet/math/Arithmetic.html - - - -

- Summation of Chebyshev polynomials, using the Clenshaw method with Reinsch modification. - - The no. of terms in the sequence. - The coefficients of the Chebyshev series, length n+1. - The value at which the series is to be evaluated. - - ORIGINAL AUTHOR: - Dr. Allan J. MacLeod; Dept. of Mathematics and Statistics, University of Paisley; High St., PAISLEY, SCOTLAND - REFERENCES: - "An error analysis of the modified Clenshaw method for evaluating Chebyshev and Fourier series" - J. Oliver, J.I.M.A., vol. 20, 1977, pp379-391 - -
- - - Valley-shaped Rosenbrock function for 2 dimensions: (x,y) -> (1-x)^2 + 100*(y-x^2)^2. - This function has a global minimum at (1,1) with f(1,1) = 0. - Common range: [-5,10] or [-2.048,2.048]. - - - https://en.wikipedia.org/wiki/Rosenbrock_function - http://www.sfu.ca/~ssurjano/rosen.html - - - - - Valley-shaped Rosenbrock function for 2 or more dimensions. - This function have a global minimum of all ones and, for 8 > N > 3, a local minimum at (-1,1,...,1). - - - https://en.wikipedia.org/wiki/Rosenbrock_function - http://www.sfu.ca/~ssurjano/rosen.html - - - - - Himmelblau, a multi-modal function: (x,y) -> (x^2+y-11)^2 + (x+y^2-7)^2 - This function has 4 global minima with f(x,y) = 0. - Common range: [-6,6]. - Named after David Mautner Himmelblau - - - https://en.wikipedia.org/wiki/Himmelblau%27s_function - - - - - Rastrigin, a highly multi-modal function with many local minima. - Global minimum of all zeros with f(0) = 0. - Common range: [-5.12,5.12]. - - - https://en.wikipedia.org/wiki/Rastrigin_function - http://www.sfu.ca/~ssurjano/rastr.html - - - - - Drop-Wave, a multi-modal and highly complex function with many local minima. - Global minimum of all zeros with f(0) = -1. - Common range: [-5.12,5.12]. - - - http://www.sfu.ca/~ssurjano/drop.html - - - - - Ackley, a function with many local minima. It is nearly flat in outer regions but has a large hole at the center. - Global minimum of all zeros with f(0) = 0. - Common range: [-32.768, 32.768]. - - - http://www.sfu.ca/~ssurjano/ackley.html - - - - - Bowl-shaped first Bohachevsky function. - Global minimum of all zeros with f(0,0) = 0. - Common range: [-100, 100] - - - http://www.sfu.ca/~ssurjano/boha.html - - - - - Plate-shaped Matyas function. - Global minimum of all zeros with f(0,0) = 0. - Common range: [-10, 10]. - - - http://www.sfu.ca/~ssurjano/matya.html - - - - - Valley-shaped six-hump camel back function. - Two global minima and four local minima. Global minima with f(x) ) -1.0316 at (0.0898,-0.7126) and (-0.0898,0.7126). - Common range: x in [-3,3], y in [-2,2]. - - - http://www.sfu.ca/~ssurjano/camel6.html - - - - - Statistics operating on arrays assumed to be unsorted. - WARNING: Methods with the Inplace-suffix may modify the data array by reordering its entries. - - - - - - - - Returns the smallest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest absolute value from the unsorted data array. 
- Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the geometric mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the harmonic mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population standard deviation from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population covariance from the provided two sample arrays. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample array. - Second sample array. - - - - Evaluates the population covariance from the full population provided as two arrays. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population array. - Second population array. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the order statistic (order 1..N) from the unsorted data array. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. 
- One-based order of the statistic, must be between 1 and N (inclusive). - - - - Estimates the median value from the unsorted data array. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the p-Percentile value from the unsorted data array. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the first quartile value from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the third quartile value from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the inter-quartile range from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Quantile selector, between 0.0 and 1.0 (inclusive). - - R-8, SciPy-(1/3,1/3): - Linear interpolation of the approximate medians for order statistics. - When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. - - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified - by 4 parameters a, b, c and d, consistent with Mathematica. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Quantile selector, between 0.0 and 1.0 (inclusive) - a-parameter - b-parameter - c-parameter - d-parameter - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. 
- Quantile selector, between 0.0 and 1.0 (inclusive) - Quantile definition, to choose what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the unsorted data array. - The rank definition can be specified to be compatible - with an existing system. - WARNING: Works inplace and can thus causes the data array to be reordered. - - - - - Estimates the arithmetic sample mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the geometric mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the harmonic mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population standard deviation from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population covariance from the provided two sample arrays. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample array. - Second sample array. - - - - Evaluates the population covariance from the full population provided as two arrays. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population array. - Second population array. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. 
- - Sample array, no sorting is assumed. - - - - Returns the smallest value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the geometric mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the harmonic mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population standard deviation from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population covariance from the provided two sample arrays. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample array. - Second sample array. - - - - Evaluates the population covariance from the full population provided as two arrays. 
- On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population array. - Second population array. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the order statistic (order 1..N) from the unsorted data array. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Estimates the median value from the unsorted data array. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the p-Percentile value from the unsorted data array. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the first quartile value from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the third quartile value from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the inter-quartile range from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Quantile selector, between 0.0 and 1.0 (inclusive). - - R-8, SciPy-(1/3,1/3): - Linear interpolation of the approximate medians for order statistics. - When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. - - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified - by 4 parameters a, b, c and d, consistent with Mathematica. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. 
- Quantile selector, between 0.0 and 1.0 (inclusive) - a-parameter - b-parameter - c-parameter - d-parameter - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Quantile selector, between 0.0 and 1.0 (inclusive) - Quantile definition, to choose what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the unsorted data array. - The rank definition can be specified to be compatible - with an existing system. - WARNING: Works inplace and can thus causes the data array to be reordered. - - - - - A class with correlation measures between two datasets. - - - - - Auto-correlation function (ACF) based on FFT for all possible lags k. - - Data array to calculate auto correlation for. - An array with the ACF as a function of the lags k. - - - - Auto-correlation function (ACF) based on FFT for lags between kMin and kMax. - - The data array to calculate auto correlation for. - Max lag to calculate ACF for must be positive and smaller than x.Length. - Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length. - An array with the ACF as a function of the lags k. - - - - Auto-correlation function based on FFT for lags k. - - The data array to calculate auto correlation for. - Array with lags to calculate ACF for. - An array with the ACF as a function of the lags k. - - - - The internal method for calculating the auto-correlation. - - The data array to calculate auto-correlation for - Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length - Max lag (EXCLUSIVE) to calculate ACF for must be positive and smaller than x.Length - An array with the ACF as a function of the lags k. - - - - Computes the Pearson Product-Moment Correlation coefficient. - - Sample data A. - Sample data B. - The Pearson product-moment correlation coefficient. - - - - Computes the Weighted Pearson Product-Moment Correlation coefficient. - - Sample data A. - Sample data B. - Corresponding weights of data. - The Weighted Pearson product-moment correlation coefficient. - - - - Computes the Pearson Product-Moment Correlation matrix. - - Array of sample data vectors. - The Pearson product-moment correlation matrix. - - - - Computes the Pearson Product-Moment Correlation matrix. - - Enumerable of sample data vectors. - The Pearson product-moment correlation matrix. - - - - Computes the Spearman Ranked Correlation coefficient. - - Sample data series A. - Sample data series B. - The Spearman ranked correlation coefficient. - - - - Computes the Spearman Ranked Correlation matrix. - - Array of sample data vectors. - The Spearman ranked correlation matrix. - - - - Computes the Spearman Ranked Correlation matrix. - - Enumerable of sample data vectors. - The Spearman ranked correlation matrix. - - - - Computes the basic statistics of data set. The class meets the - NIST standard of accuracy for mean, variance, and standard deviation - (the only statistics they provide exact values for) and exceeds them - in increased accuracy mode. - Recommendation: consider to use RunningStatistics instead. 
- - - This type declares a DataContract for out of the box ephemeral serialization - with engines like DataContractSerializer, Protocol Buffers and FsPickler, - but does not guarantee any compatibility between versions. - It is not recommended to rely on this mechanism for durable persistence. - - - - - Initializes a new instance of the class. - - The sample data. - - If set to true, increased accuracy mode used. - Increased accuracy mode uses types for internal calculations. - - - Don't use increased accuracy for data sets containing large values (in absolute value). - This may cause the calculations to overflow. - - - - - Initializes a new instance of the class. - - The sample data. - - If set to true, increased accuracy mode used. - Increased accuracy mode uses types for internal calculations. - - - Don't use increased accuracy for data sets containing large values (in absolute value). - This may cause the calculations to overflow. - - - - - Gets the size of the sample. - - The size of the sample. - - - - Gets the sample mean. - - The sample mean. - - - - Gets the unbiased population variance estimator (on a dataset of size N will use an N-1 normalizer). - - The sample variance. - - - - Gets the unbiased population standard deviation (on a dataset of size N will use an N-1 normalizer). - - The sample standard deviation. - - - - Gets the sample skewness. - - The sample skewness. - Returns zero if is less than three. - - - - Gets the sample kurtosis. - - The sample kurtosis. - Returns zero if is less than four. - - - - Gets the maximum sample value. - - The maximum sample value. - - - - Gets the minimum sample value. - - The minimum sample value. - - - - Computes descriptive statistics from a stream of data values. - - A sequence of datapoints. - - - - Computes descriptive statistics from a stream of nullable data values. - - A sequence of datapoints. - - - - Computes descriptive statistics from a stream of data values. - - A sequence of datapoints. - - - - Computes descriptive statistics from a stream of nullable data values. - - A sequence of datapoints. - - - - Internal use. Method use for setting the statistics. - - For setting Mean. - For setting Variance. - For setting Skewness. - For setting Kurtosis. - For setting Minimum. - For setting Maximum. - For setting Count. - - - - A consists of a series of s, - each representing a region limited by a lower bound (exclusive) and an upper bound (inclusive). - - - This type declares a DataContract for out of the box ephemeral serialization - with engines like DataContractSerializer, Protocol Buffers and FsPickler, - but does not guarantee any compatibility between versions. - It is not recommended to rely on this mechanism for durable persistence. - - - - - This IComparer performs comparisons between a point and a bucket. - - - - - Compares a point and a bucket. The point will be encapsulated in a bucket with width 0. - - The first bucket to compare. - The second bucket to compare. - -1 when the point is less than this bucket, 0 when it is in this bucket and 1 otherwise. - - - - Lower Bound of the Bucket. - - - - - Upper Bound of the Bucket. - - - - - The number of datapoints in the bucket. - - - Value may be NaN if this was constructed as a argument. - - - - - Initializes a new instance of the Bucket class. - - - - - Constructs a Bucket that can be used as an argument for a - like when performing a Binary search. - - Value to look for - - - - Creates a copy of the Bucket with the lowerbound, upperbound and counts exactly equal. 
- - A cloned Bucket object. - - - - Width of the Bucket. - - - - - True if this is a single point argument for - when performing a Binary search. - - - - - Default comparer. - - - - - This method check whether a point is contained within this bucket. - - The point to check. - - 0 if the point falls within the bucket boundaries; - -1 if the point is smaller than the bucket, - +1 if the point is larger than the bucket. - - - - Comparison of two disjoint buckets. The buckets cannot be overlapping. - - - 0 if UpperBound and LowerBound are bit-for-bit equal - 1 if This bucket is lower that the compared bucket - -1 otherwise - - - - - Checks whether two Buckets are equal. - - - UpperBound and LowerBound are compared bit-for-bit, but This method tolerates a - difference in Count given by . - - - - - Provides a hash code for this bucket. - - - - - Formats a human-readable string for this bucket. - - - - - A class which computes histograms of data. - - - - - Contains all the Buckets of the Histogram. - - - - - Indicates whether the elements of buckets are currently sorted. - - - - - Initializes a new instance of the Histogram class. - - - - - Constructs a Histogram with a specific number of equally sized buckets. The upper and lower bound of the histogram - will be set to the smallest and largest datapoint. - - The data sequence to build a histogram on. - The number of buckets to use. - - - - Constructs a Histogram with a specific number of equally sized buckets. - - The data sequence to build a histogram on. - The number of buckets to use. - The histogram lower bound. - The histogram upper bound. - - - - Add one data point to the histogram. If the datapoint falls outside the range of the histogram, - the lowerbound or upperbound will automatically adapt. - - The datapoint which we want to add. - - - - Add a sequence of data point to the histogram. If the datapoint falls outside the range of the histogram, - the lowerbound or upperbound will automatically adapt. - - The sequence of datapoints which we want to add. - - - - Adds a Bucket to the Histogram. - - - - - Sort the buckets if needed. - - - - - Returns the Bucket that contains the value v. - - The point to search the bucket for. - A copy of the bucket containing point . - - - - Returns the index in the Histogram of the Bucket - that contains the value v. - - The point to search the bucket index for. - The index of the bucket containing the point. - - - - Returns the lower bound of the histogram. - - - - - Returns the upper bound of the histogram. - - - - - Gets the n'th bucket. - - The index of the bucket to be returned. - A copy of the n'th bucket. - - - - Gets the number of buckets. - - - - - Gets the total number of datapoints in the histogram. - - - - - Prints the buckets contained in the . - - - - - Kernel density estimation (KDE). - - - - - Estimate the probability density function of a random variable. - - - The routine assumes that the provided kernel is well defined, i.e. a real non-negative function that integrates to 1. - - - - - Estimate the probability density function of a random variable with a Gaussian kernel. - - - - - Estimate the probability density function of a random variable with an Epanechnikov kernel. - The Epanechnikov kernel is optimal in a mean square error sense. - - - - - Estimate the probability density function of a random variable with a uniform kernel. - - - - - Estimate the probability density function of a random variable with a triangular kernel. 
- - - - - A Gaussian kernel (PDF of Normal distribution with mean 0 and variance 1). - This kernel is the default. - - - - - Epanechnikov Kernel: - x => Math.Abs(x) <= 1.0 ? 3.0/4.0(1.0-x^2) : 0.0 - - - - - Uniform Kernel: - x => Math.Abs(x) <= 1.0 ? 1.0/2.0 : 0.0 - - - - - Triangular Kernel: - x => Math.Abs(x) <= 1.0 ? (1.0-Math.Abs(x)) : 0.0 - - - - - A hybrid Monte Carlo sampler for multivariate distributions. - - - - - Number of parameters in the density function. - - - - - Distribution to sample momentum from. - - - - - Standard deviations used in the sampling of different components of the - momentum. - - - - - Gets or sets the standard deviations used in the sampling of different components of the - momentum. - - When the length of pSdv is not the same as Length. - - - - Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. - The components of the momentum will be sampled from a normal distribution with standard deviation - 1 using the default random - number generator. A three point estimation will be used for differentiation. - This constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - When the number of burnInterval iteration is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. - The components of the momentum will be sampled from a normal distribution with standard deviation - specified by pSdv using the default random - number generator. A three point estimation will be used for differentiation. - This constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - The standard deviations of the normal distributions that are used to sample - the components of the momentum. - When the number of burnInterval iteration is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. - The components of the momentum will be sampled from a normal distribution with standard deviation - specified by pSdv using the a random number generator provided by the user. - A three point estimation will be used for differentiation. - This constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - The standard deviations of the normal distributions that are used to sample - the components of the momentum. - Random number generator used for sampling the momentum. - When the number of burnInterval iteration is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. - The components of the momentum will be sampled from a normal distribution with standard deviations - given by pSdv. This constructor will set the burn interval, the method used for - numerical differentiation and the random number generator. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. 
- The number of iterations in between returning samples. - The standard deviations of the normal distributions that are used to sample - the components of the momentum. - Random number generator used for sampling the momentum. - The method used for numerical differentiation. - When the number of burnInterval iteration is negative. - When the length of pSdv is not the same as x0. - - - - Initialize parameters. - - The current location of the sampler. - - - - Checking that the location and the momentum are of the same dimension and that each component is positive. - - The standard deviations used for sampling the momentum. - When the length of pSdv is not the same as Length or if any - component is negative. - When pSdv is null. - - - - Use for copying objects in the Burn method. - - The source of copying. - A copy of the source object. - - - - Use for creating temporary objects in the Burn method. - - An object of type T. - - - - - - - - - - - - - Samples the momentum from a normal distribution. - - The momentum to be randomized. - - - - The default method used for computing the gradient. Uses a simple three point estimation. - - Function which the gradient is to be evaluated. - The location where the gradient is to be evaluated. - The gradient of the function at the point x. - - - - The Hybrid (also called Hamiltonian) Monte Carlo produces samples from distribution P using a set - of Hamiltonian equations to guide the sampling process. It uses the negative of the log density as - a potential energy, and a randomly generated momentum to set up a Hamiltonian system, which is then used - to sample the distribution. This can result in a faster convergence than the random walk Metropolis sampler - (). - - The type of samples this sampler produces. - - - - The delegate type that defines a derivative evaluated at a certain point. - - Function to be differentiated. - Value where the derivative is computed. - - - - Evaluates the energy function of the target distribution. - - - - - The current location of the sampler. - - - - - The number of burn iterations between two samples. - - - - - The size of each step in the Hamiltonian equation. - - - - - The number of iterations in the Hamiltonian equation. - - - - - The algorithm used for differentiation. - - - - - Gets or sets the number of iterations in between returning samples. - - When burn interval is negative. - - - - Gets or sets the number of iterations in the Hamiltonian equation. - - When frog leap steps is negative or zero. - - - - Gets or sets the size of each step in the Hamiltonian equation. - - When step size is negative or zero. - - - - Constructs a new Hybrid Monte Carlo sampler. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - Random number generator used for sampling the momentum. - The method used for differentiation. - When the number of burnInterval iteration is negative. - When either x0, pdfLnP or diff is null. - - - - Returns a sample from the distribution P. - - - - - This method runs the sampler for a number of iterations without returning a sample - - - - - Method used to update the sample location. Used in the end of the loop. - - The old energy. - The old gradient/derivative of the energy. - The new sample. - The new gradient/derivative of the energy. - The new energy. - The difference between the old Hamiltonian and new Hamiltonian. 
Use to determine - if an update should take place. - - - - Use for creating temporary objects in the Burn method. - - An object of type T. - - - - Use for copying objects in the Burn method. - - The source of copying. - A copy of the source object. - - - - Method for doing dot product. - - First vector/scalar in the product. - Second vector/scalar in the product. - - - - Method for adding, multiply the second vector/scalar by factor and then - add it to the first vector/scalar. - - First vector/scalar. - Scalar factor multiplying by the second vector/scalar. - Second vector/scalar. - - - - Multiplying the second vector/scalar by factor and then subtract it from - the first vector/scalar. - - First vector/scalar. - Scalar factor to be multiplied to the second vector/scalar. - Second vector/scalar. - - - - Method for sampling a random momentum. - - Momentum to be randomized. - - - - The Hamiltonian equations that is used to produce the new sample. - - - - - Method to compute the Hamiltonian used in the method. - - The momentum. - The energy. - Hamiltonian=E+p.p/2 - - - - Method to check and set a quantity to a non-negative value. - - Proposed value to be checked. - Returns value if it is greater than or equal to zero. - Throws when value is negative. - - - - Method to check and set a quantity to a non-negative value. - - Proposed value to be checked. - Returns value if it is greater than to zero. - Throws when value is negative or zero. - - - - Method to check and set a quantity to a non-negative value. - - Proposed value to be checked. - Returns value if it is greater than zero. - Throws when value is negative or zero. - - - - Provides utilities to analysis the convergence of a set of samples from - a . - - - - - Computes the auto correlations of a series evaluated by a function f. - - The series for computing the auto correlation. - The lag in the series - The function used to evaluate the series. - The auto correlation. - Throws if lag is zero or if lag is - greater than or equal to the length of Series. - - - - Computes the effective size of the sample when evaluated by a function f. - - The samples. - The function use for evaluating the series. - The effective size when auto correlation is taken into account. - - - - A method which samples datapoints from a proposal distribution. The implementation of this sampler - is stateless: no variables are saved between two calls to Sample. This proposal is different from - in that it doesn't take any parameters; it samples random - variables from the whole domain. - - The type of the datapoints. - A sample from the proposal distribution. - - - - A method which samples datapoints from a proposal distribution given an initial sample. The implementation - of this sampler is stateless: no variables are saved between two calls to Sample. This proposal is different from - in that it samples locally around an initial point. In other words, it - makes a small local move rather than producing a global sample from the proposal. - - The type of the datapoints. - The initial sample. - A sample from the proposal distribution. - - - - A function which evaluates a density. - - The type of data the distribution is over. - The sample we want to evaluate the density for. - - - - A function which evaluates a log density. - - The type of data the distribution is over. - The sample we want to evaluate the log density for. - - - - A function which evaluates the log of a transition kernel probability. 
- - The type for the space over which this transition kernel is defined. - The new state in the transition. - The previous state in the transition. - The log probability of the transition. - - - - The interface which every sampler must implement. - - The type of samples this sampler produces. - - - - The random number generator for this class. - - - - - Keeps track of the number of accepted samples. - - - - - Keeps track of the number of calls to the proposal sampler. - - - - - Initializes a new instance of the class. - - Thread safe instances are two and half times slower than non-thread - safe classes. - - - - Gets or sets the random number generator. - - When the random number generator is null. - - - - Returns one sample. - - - - - Returns a number of samples. - - The number of samples we want. - An array of samples. - - - - Gets the acceptance rate of the sampler. - - - - - Metropolis-Hastings sampling produces samples from distribution P by sampling from a proposal distribution Q - and accepting/rejecting based on the density of P. Metropolis-Hastings sampling doesn't require that the - proposal distribution Q is symmetric in comparison to . It does need to - be able to evaluate the proposal sampler's log density though. All densities are required to be in log space. - - The Metropolis-Hastings sampler is a stateful sampler. It keeps track of where it currently is in the domain - of the distribution P. - - The type of samples this sampler produces. - - - - Evaluates the log density function of the target distribution. - - - - - Evaluates the log transition probability for the proposal distribution. - - - - - A function which samples from a proposal distribution. - - - - - The current location of the sampler. - - - - - The log density at the current location. - - - - - The number of burn iterations between two samples. - - - - - Constructs a new Metropolis-Hastings sampler using the default random number generator. This - constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - The log transition probability for the proposal distribution. - A method that samples from the proposal distribution. - The number of iterations in between returning samples. - When the number of burnInterval iteration is negative. - - - - Gets or sets the number of iterations in between returning samples. - - When burn interval is negative. - - - - This method runs the sampler for a number of iterations without returning a sample - - - - - Returns a sample from the distribution P. - - - - - Metropolis sampling produces samples from distribution P by sampling from a proposal distribution Q - and accepting/rejecting based on the density of P. Metropolis sampling requires that the proposal - distribution Q is symmetric. All densities are required to be in log space. - - The Metropolis sampler is a stateful sampler. It keeps track of where it currently is in the domain - of the distribution P. - - The type of samples this sampler produces. - - - - Evaluates the log density function of the sampling distribution. - - - - - A function which samples from a proposal distribution. - - - - - The current location of the sampler. - - - - - The log density at the current location. - - - - - The number of burn iterations between two samples. - - - - - Constructs a new Metropolis sampler using the default random number generator. - - The initial sample. - The log density of the distribution we want to sample from. 
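In log space, the Metropolis-Hastings acceptance test described above combines the target log densities with the proposal's transition log probabilities, which is what allows asymmetric proposals. A minimal sketch with illustrative names (the transition kernel argument order follows the convention above: new state first, previous state second):

```python
import math, random

def metropolis_hastings(log_p, sample_q, log_k, x0, n, burn=0, seed=0):
    """Illustrative Metropolis-Hastings loop, all densities in log space."""
    rng = random.Random(seed)
    x, lp, out = x0, log_p(x0), []
    for i in range(n + burn):
        y = sample_q(x, rng)                            # propose from q(. | x)
        ly = log_p(y)
        # log acceptance ratio; the kernel terms cancel only for symmetric proposals
        log_a = ly - lp + log_k(x, y) - log_k(y, x)
        if math.log(rng.random() + 1e-300) < log_a:     # guard against log(0)
            x, lp = y, ly
        if i >= burn:
            out.append(x)
    return out

# Target: standard normal. Proposal: a deliberately drifted (asymmetric) random walk.
log_p = lambda x: -0.5 * x * x
sample_q = lambda x, rng: x + rng.gauss(0.5, 1.0)
log_k = lambda x_new, x_old: -0.5 * (x_new - x_old - 0.5) ** 2   # log N(x_new; x_old + 0.5, 1), up to a constant
xs = metropolis_hastings(log_p, sample_q, log_k, 0.0, 5000, burn=500)
print(sum(xs) / len(xs))   # close to 0 despite the drifted proposal
```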
- A method that samples from the symmetric proposal distribution. - The number of iterations in between returning samples. - When the number of burnInterval iteration is negative. - - - - Gets or sets the number of iterations in between returning samples. - - When burn interval is negative. - - - - This method runs the sampler for a number of iterations without returning a sample - - - - - Returns a sample from the distribution P. - - - - - Rejection sampling produces samples from distribution P by sampling from a proposal distribution Q - and accepting/rejecting based on the density of P and Q. The density of P and Q don't need to - to be normalized, but we do need that for each x, P(x) < Q(x). - - The type of samples this sampler produces. - - - - Evaluates the density function of the sampling distribution. - - - - - Evaluates the density function of the proposal distribution. - - - - - A function which samples from a proposal distribution. - - - - - Constructs a new rejection sampler using the default random number generator. - - The density of the distribution we want to sample from. - The density of the proposal distribution. - A method that samples from the proposal distribution. - - - - Returns a sample from the distribution P. - - When the algorithms detects that the proposal - distribution doesn't upper bound the target distribution. - - - - A hybrid Monte Carlo sampler for univariate distributions. - - - - - Distribution to sample momentum from. - - - - - Standard deviations used in the sampling of the - momentum. - - - - - Gets or sets the standard deviation used in the sampling of the - momentum. - - When standard deviation is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution. - The momentum will be sampled from a normal distribution with standard deviation - specified by pSdv using the default random - number generator. A three point estimation will be used for differentiation. - This constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - The standard deviation of the normal distribution that is used to sample - the momentum. - When the number of burnInterval iteration is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution. - The momentum will be sampled from a normal distribution with standard deviation - specified by pSdv using a random - number generator provided by the user. A three point estimation will be used for differentiation. - This constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - The standard deviation of the normal distribution that is used to sample - the momentum. - Random number generator used to sample the momentum. - When the number of burnInterval iteration is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. - The momentum will be sampled from a normal distribution with standard deviation - given by pSdv using a random - number generator provided by the user. This constructor will set both the burn interval and the method used for - numerical differentiation. 
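Rejection sampling, as summarised above, only needs the two (possibly unnormalized) densities and a way to sample the proposal: a draw y is kept with probability P(y)/Q(y), and a ratio above one is exactly the documented failure case where the proposal fails to upper-bound the target. A small illustrative sketch:

```python
import random

def rejection_sample(p, q, sample_q, n, seed=0):
    """Illustrative rejection sampler; assumes p(x) <= q(x) everywhere."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        y = sample_q(rng)
        ratio = p(y) / q(y)
        if ratio > 1.0:
            # mirrors the documented failure mode: Q does not upper-bound P
            raise ValueError(f"proposal density does not bound the target at {y!r}")
        if rng.random() < ratio:
            out.append(y)
    return out

# Target: triangular density 2x on [0, 1]; envelope: constant 2 over the uniform proposal.
p = lambda x: 2.0 * x if 0.0 <= x <= 1.0 else 0.0
q = lambda x: 2.0
xs = rejection_sample(p, q, lambda rng: rng.random(), 10000)
print(sum(xs) / len(xs))   # mean of the triangular density is 2/3
```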
- - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - The standard deviation of the normal distribution that is used to sample - the momentum. - The method used for numerical differentiation. - Random number generator used for sampling the momentum. - When the number of burnInterval iteration is negative. - - - - Use for copying objects in the Burn method. - - The source of copying. - A copy of the source object. - - - - Use for creating temporary objects in the Burn method. - - An object of type T. - - - - - - - - - - - - - Samples the momentum from a normal distribution. - - The momentum to be randomized. - - - - The default method used for computing the derivative. Uses a simple three point estimation. - - Function for which the derivative is to be evaluated. - The location where the derivative is to be evaluated. - The derivative of the function at the point x. - - - - Slice sampling produces samples from distribution P by uniformly sampling from under the pdf of P using - a technique described in "Slice Sampling", R. Neal, 2003. All densities are required to be in log space. - - The slice sampler is a stateful sampler. It keeps track of where it currently is in the domain - of the distribution P. - - - - - Evaluates the log density function of the target distribution. - - - - - The current location of the sampler. - - - - - The log density at the current location. - - - - - The number of burn iterations between two samples. - - - - - The scale of the slice sampler. - - - - - Constructs a new Slice sampler using the default random - number generator. The burn interval will be set to 0. - - The initial sample. - The density of the distribution we want to sample from. - The scale factor of the slice sampler. - When the scale of the slice sampler is not positive. - - - - Constructs a new slice sampler using the default random number generator. It - will set the number of burnInterval iterations and run a burnInterval phase. - - The initial sample. - The density of the distribution we want to sample from. - The number of iterations in between returning samples. - The scale factor of the slice sampler. - When the number of burnInterval iteration is negative. - When the scale of the slice sampler is not positive. - - - - Gets or sets the number of iterations in between returning samples. - - When burn interval is negative. - - - - Gets or sets the scale of the slice sampler. - - - - - This method runs the sampler for a number of iterations without returning a sample - - - - - Returns a sample from the distribution P. - - - - - Running statistics over a window of data, allows updating by adding values. - - - - - Gets the total number of samples. - - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Evaluates the variance from the provided full population. 
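For the univariate slice sampler cited above (R. Neal, "Slice Sampling", 2003), the scale parameter controls the width used to bracket the current slice. The sketch below uses the common step-out/shrinkage procedure purely as an illustration of the idea; the library's exact bracketing strategy is not shown here and may differ.

```python
import math, random

def slice_sample(log_p, x0, scale, n, burn=0, seed=0):
    """Illustrative univariate slice sampler (step-out and shrinkage), after Neal (2003)."""
    rng = random.Random(seed)
    x, out = x0, []
    for i in range(n + burn):
        log_y = log_p(x) + math.log(rng.random() + 1e-300)   # auxiliary height under the density
        left = x - scale * rng.random()                      # place an interval of width `scale`
        right = left + scale
        while log_p(left) > log_y:                           # step out until both ends leave the slice
            left -= scale
        while log_p(right) > log_y:
            right += scale
        while True:                                          # shrink towards x until a point lands inside
            cand = rng.uniform(left, right)
            if log_p(cand) > log_y:
                x = cand
                break
            if cand < x:
                left = cand
            else:
                right = cand
        if i >= burn:
            out.append(x)
    return out

xs = slice_sample(lambda v: -0.5 * v * v, 0.0, scale=1.0, n=5000, burn=500)
print(sum(xs) / len(xs), sum(v * v for v in xs) / len(xs))   # near 0 and near 1 for a standard normal
```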
- On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Update the running statistics by adding another observed sample (in-place). - - - - - Update the running statistics by adding a sequence of observed sample (in-place). - - - - Replace ties with their mean (non-integer ranks). Default. - - - Replace ties with their minimum (typical sports ranking). - - - Replace ties with their maximum. - - - Permutation with increasing values at each index of ties. - - - - Running statistics accumulator, allows updating by adding values - or by combining two accumulators. - - - This type declares a DataContract for out of the box ephemeral serialization - with engines like DataContractSerializer, Protocol Buffers and FsPickler, - but does not guarantee any compatibility between versions. - It is not recommended to rely on this mechanism for durable persistence. - - - - - Gets the total number of samples. - - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population skewness from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than three entries or if any entry is NaN. - - - - - Evaluates the population skewness from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Estimates the unbiased population kurtosis from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than four entries or if any entry is NaN. - - - - - Evaluates the population kurtosis from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). 
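The running-statistics accumulator summarised above ("allows updating by adding values or by combining two accumulators") is usually implemented by carrying the count, the mean and a centred second moment, updated Welford-style. The sketch below is an illustration of that idea only, covering min, max, mean and both variance normalizers; it is not the library's internal state.

```python
import math

class RunningStats:
    """Illustrative Welford-style accumulator: push values one by one, or combine two accumulators."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.min, self.max = math.inf, -math.inf

    def push(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)              # centred second moment, numerically stable
        self.min, self.max = min(self.min, x), max(self.max, x)

    def variance(self):                              # N-1 normalizer (Bessel's correction)
        return self.m2 / (self.n - 1) if self.n > 1 else float("nan")

    def population_variance(self):                   # N normalizer
        return self.m2 / self.n if self.n > 0 else float("nan")

    def combine(self, other):
        out = RunningStats()
        out.n = self.n + other.n
        if out.n == 0:
            return out
        d = other.mean - self.mean
        out.mean = self.mean + d * other.n / out.n
        out.m2 = self.m2 + other.m2 + d * d * self.n * other.n / out.n
        out.min, out.max = min(self.min, other.min), max(self.max, other.max)
        return out

a, b = RunningStats(), RunningStats()
for v in (1.0, 2.0, 3.0): a.push(v)
for v in (4.0, 5.0): b.push(v)
c = a.combine(b)
print(c.mean, c.variance(), c.min, c.max)   # 3.0, 2.5, 1.0, 5.0 for the samples 1..5
```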
- Returns NaN if data has less than three entries or if any entry is NaN. - - - - - Update the running statistics by adding another observed sample (in-place). - - - - - Update the running statistics by adding a sequence of observed sample (in-place). - - - - - Create a new running statistics over the combined samples of two existing running statistics. - - - - - Statistics operating on an array already sorted ascendingly. - - - - - - - - Returns the smallest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the largest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the order statistic (order 1..N) from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Estimates the median value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the p-Percentile value from the sorted data array (ascending). - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the first quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the third quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the inter-quartile range from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - - R-8, SciPy-(1/3,1/3): - Linear interpolation of the approximate medians for order statistics. - When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. - - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified - by 4 parameters a, b, c and d, consistent with Mathematica. - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - a-parameter - b-parameter - c-parameter - d-parameter - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - Sample array, must be sorted ascendingly. 
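The R-8 rule quoted above for the sorted-array quantile (the SciPy (1/3, 1/3) parameterization, i.e. Hyndman-Fan type 8) amounts to interpolating between order statistics at the fractional position h = (N + 1/3)·tau + 1/3, with the two end-point clamps stated in the description. A small sketch, assuming the input is already sorted ascending:

```python
def quantile_r8(sorted_data, tau):
    """R-8 / Hyndman-Fan type-8 quantile of an ascending-sorted sequence (sketch)."""
    n = len(sorted_data)
    if n == 0 or not 0.0 <= tau <= 1.0:
        return float("nan")
    if tau < (2.0 / 3.0) / (n + 1.0 / 3.0):          # below the first plotting position: use x1
        return sorted_data[0]
    if tau >= (n - 1.0 / 3.0) / (n + 1.0 / 3.0):     # at or above the last: use xN
        return sorted_data[-1]
    h = (n + 1.0 / 3.0) * tau + 1.0 / 3.0            # fractional one-based order statistic
    k = int(h)                                        # floor; 1 <= k < n after the clamps above
    return sorted_data[k - 1] + (h - k) * (sorted_data[k] - sorted_data[k - 1])

data = sorted([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
print(quantile_r8(data, 0.5))                             # median estimate
print(quantile_r8(data, 0.25), quantile_r8(data, 0.75))   # quartile estimates
```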
- Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the quantile tau from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the sorted data array (ascending). - The rank definition can be specified to be compatible - with an existing system. - - - - - Returns the smallest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the largest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the order statistic (order 1..N) from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Estimates the median value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the p-Percentile value from the sorted data array (ascending). - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the first quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the third quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the inter-quartile range from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - - R-8, SciPy-(1/3,1/3): - Linear interpolation of the approximate medians for order statistics. - When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. - - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified - by 4 parameters a, b, c and d, consistent with Mathematica. 
- - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - a-parameter - b-parameter - c-parameter - d-parameter - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the quantile tau from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the sorted data array (ascending). - The rank definition can be specified to be compatible - with an existing system. - - - - - Extension methods to return basic statistics on set of data. - - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The sample data. - The maximum value in the sample data. - - - - Returns the minimum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the maximum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the minimum magnitude and phase value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum magnitude and phase value in the sample data. 
- Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum magnitude and phase value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum magnitude and phase value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The data to calculate the mean of. - The mean of the sample. - - - - Evaluates the geometric mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the geometric mean of. - The geometric mean of the sample. - - - - Evaluates the geometric mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the geometric mean of. - The geometric mean of the sample. - - - - Evaluates the harmonic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the harmonic mean of. - The harmonic mean of the sample. - - - - Evaluates the harmonic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the harmonic mean of. - The harmonic mean of the sample. - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the unbiased population standard deviation from the provided samples. 
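For the arithmetic, geometric and harmonic mean helpers documented above, the definitions are short enough to show directly; the geometric mean is taken through logarithms here to avoid overflow, and it is only defined for strictly positive data. A tiny illustrative sketch:

```python
import math

def arithmetic_mean(xs): return sum(xs) / len(xs)
def geometric_mean(xs):  return math.exp(sum(math.log(x) for x in xs) / len(xs))  # requires x > 0
def harmonic_mean(xs):   return len(xs) / sum(1.0 / x for x in xs)                # requires x != 0

xs = [1.0, 2.0, 4.0]
print(arithmetic_mean(xs), geometric_mean(xs), harmonic_mean(xs))
# 2.333..., 2.0, 1.714... : arithmetic >= geometric >= harmonic for positive data
```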
- On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the unbiased population skewness from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than three entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population skewness from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than three entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the skewness from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than two entries or if any entry is NaN. - - The full population data. - - - - Evaluates the skewness from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the unbiased population kurtosis from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than four entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population kurtosis from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than four entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the kurtosis from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than three entries or if any entry is NaN. - - The full population data. - - - - Evaluates the kurtosis from the full population. 
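The "type 1" skewness and kurtosis mentioned above are the plain moment ratios of the data, while the "type 2" estimators apply small-sample bias corrections to them. The sketch below computes the type-1 values directly and, as an assumption, adds the widely used Joanes & Gill corrections that the type-1/type-2 naming usually refers to; treat the corrected formulas as illustrative rather than a statement of the library's exact estimators.

```python
def skewness_kurtosis(xs):
    """Moment-based skewness and excess kurtosis (illustrative; assumed Joanes & Gill conventions)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    g1 = m3 / m2 ** 1.5                 # type-1 (population) skewness
    g2 = m4 / m2 ** 2 - 3.0             # type-1 (population) excess kurtosis
    # type-2 (bias-corrected sample) estimators -- assumed convention, requires n >= 4
    G1 = g1 * (n * (n - 1)) ** 0.5 / (n - 2)
    G2 = ((n + 1) * g2 + 6.0) * (n - 1) / ((n - 2) * (n - 3))
    return g1, g2, G1, G2

print(skewness_kurtosis([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))
```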
- Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than three entries or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the sample mean and the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the unbiased population skewness and kurtosis from the provided samples in a single pass. - Uses a normalizer (Bessel's correction; type 2). - - A subset of samples, sampled from the full population. - - - - Evaluates the skewness and kurtosis from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - - The full population data. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - The full population data. 
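As with the variance, the only difference between the sample and population covariance estimators described above is the normalizer (Bessel's correction versus none). For reference, the standard definitions are:

```latex
\operatorname{cov}_{\text{sample}}(x, y) = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y}),
\qquad
\operatorname{cov}_{\text{pop}}(x, y) = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})
```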
- - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - The full population data. - - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The full population data. - The full population data. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the RMS of. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the RMS of. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The data to calculate the mean of. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). 
- Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. 
- Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - - - - Evaluates the rank of each entry of the provided samples. - The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the provided samples. - The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the provided samples. 
- The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. 
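The empirical CDF estimated by the helpers above is the fraction of samples less than or equal to x, which on sorted data reduces to a binary search. A minimal sketch of that reading; any tie or interpolation refinements the library applies are not modelled here.

```python
import bisect

def empirical_cdf(samples, x):
    """Fraction of samples <= x (illustrative empirical CDF)."""
    data = sorted(samples)
    return bisect.bisect_right(data, x) / len(data)

data = [1.0, 2.0, 2.0, 3.0, 5.0]
print(empirical_cdf(data, 2.0))   # 0.6 : three of the five samples are <= 2
print(empirical_cdf(data, 4.0))   # 0.8
```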
- - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Calculates the entropy of a stream of double values in bits. - Returns NaN if any of the values in the stream are NaN. - - The data sample sequence. - - - - Calculates the entropy of a stream of double values in bits. - Returns NaN if any of the values in the stream are NaN. - Null-entries are ignored. - - The data sample sequence. - - - - Evaluates the sample mean over a moving window, for each samples. - Returns NaN if no data is empty or if any entry is NaN. - - The sample stream to calculate the mean of. - The number of last samples to consider. - - - - Statistics operating on an IEnumerable in a single pass, without keeping the full data in memory. - Can be used in a streaming way, e.g. on large datasets not fitting into memory. - - - - - - - - Returns the smallest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. 
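The entropy helper documented a few entries above reports its result in bits, so the natural reading (an assumption, not a statement of the exact implementation) is the Shannon entropy of the empirical distribution of the distinct values, using base-2 logarithms:

```python
import math
from collections import Counter

def entropy_bits(values):
    """Shannon entropy (base 2) of the empirical distribution of the values (sketch)."""
    values = list(values)
    if any(isinstance(v, float) and math.isnan(v) for v in values):
        return float("nan")                       # mirror the documented NaN propagation
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(entropy_bits([0.0, 0.0, 1.0, 1.0]))   # 1.0 bit for a fair two-valued stream
print(entropy_bits([0.0, 0.0, 0.0, 0.0]))   # 0.0 bits: no uncertainty
```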
- - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the geometric mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the geometric mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the harmonic mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the harmonic mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. 
- - - - Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample stream. - Second sample stream. - - - - Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample stream. - Second sample stream. - - - - Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population stream. - Second population stream. 
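Computing the covariance of two streams "in a single pass without memoization", as described above, is typically done with a co-moment update analogous to Welford's variance recurrence. An illustrative sketch of that update; the names and the exact recurrence are this example's, not the library's.

```python
def streaming_covariance(pairs, population=False):
    """Single-pass covariance of an iterable of (x, y) pairs (illustrative co-moment update)."""
    n, mean_x, mean_y, comoment = 0, 0.0, 0.0, 0.0
    for x, y in pairs:
        n += 1
        dx = x - mean_x
        mean_x += dx / n
        mean_y += (y - mean_y) / n
        comoment += dx * (y - mean_y)     # old x-delta times the freshly updated y-mean residual
    if n == 0 or (n < 2 and not population):
        return float("nan")
    return comoment / (n if population else n - 1)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
print(streaming_covariance(zip(xs, ys)))          # sample covariance, ~3.333
print(streaming_covariance(zip(xs, ys), True))    # population covariance, 2.5
```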
- - - - Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population stream. - Second population stream. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Calculates the entropy of a stream of double values. - Returns NaN if any of the values in the stream are NaN. - - The input stream to evaluate. - - - - - Used to simplify parallel code, particularly between the .NET 4.0 and Silverlight Code. - - - - - Executes a for loop in which iterations may run in parallel. - - The start index, inclusive. - The end index, exclusive. - The body to be invoked for each iteration range. - - - - Executes a for loop in which iterations may run in parallel. - - The start index, inclusive. - The end index, exclusive. - The partition size for splitting work into smaller pieces. - The body to be invoked for each iteration range. - - - - Executes each of the provided actions inside a discrete, asynchronous task. - - An array of actions to execute. - The actions array contains a null element. - At least one invocation of the actions threw an exception. - - - - Selects an item (such as Max or Min). - - Starting index of the loop. - Ending index of the loop - The function to select items over a subset. - The function to select the item of selection from the subsets. - The selected value. - - - - Selects an item (such as Max or Min). - - The array to iterate over. - The function to select items over a subset. - The function to select the item of selection from the subsets. - The selected value. - - - - Selects an item (such as Max or Min). - - Starting index of the loop. - Ending index of the loop - The function to select items over a subset. - The function to select the item of selection from the subsets. - Default result of the reduce function on an empty set. - The selected value. - - - - Selects an item (such as Max or Min). - - The array to iterate over. - The function to select items over a subset. - The function to select the item of selection from the subsets. - Default result of the reduce function on an empty set. - The selected value. - - - - Double-precision trigonometry toolkit. - - - - - Constant to convert a degree to grad. - - - - - Converts a degree (360-periodic) angle to a grad (400-periodic) angle. - - The degree to convert. - The converted grad angle. - - - - Converts a degree (360-periodic) angle to a radian (2*Pi-periodic) angle. - - The degree to convert. - The converted radian angle. - - - - Converts a grad (400-periodic) angle to a degree (360-periodic) angle. - - The grad to convert. - The converted degree. - - - - Converts a grad (400-periodic) angle to a radian (2*Pi-periodic) angle. - - The grad to convert. - The converted radian. - - - - Converts a radian (2*Pi-periodic) angle to a degree (360-periodic) angle. - - The radian to convert. - The converted degree. 
- - - - Converts a radian (2*Pi-periodic) angle to a grad (400-periodic) angle. - - The radian to convert. - The converted grad. - - - - Normalized Sinc function. sinc(x) = sin(pi*x)/(pi*x). - - - - - Trigonometric Sine of an angle in radian, or opposite / hypotenuse. - - The angle in radian. - The sine of the radian angle. - - - - Trigonometric Sine of a Complex number. - - The complex value. - The sine of the complex number. - - - - Trigonometric Cosine of an angle in radian, or adjacent / hypotenuse. - - The angle in radian. - The cosine of an angle in radian. - - - - Trigonometric Cosine of a Complex number. - - The complex value. - The cosine of a complex number. - - - - Trigonometric Tangent of an angle in radian, or opposite / adjacent. - - The angle in radian. - The tangent of the radian angle. - - - - Trigonometric Tangent of a Complex number. - - The complex value. - The tangent of the complex number. - - - - Trigonometric Cotangent of an angle in radian, or adjacent / opposite. Reciprocal of the tangent. - - The angle in radian. - The cotangent of an angle in radian. - - - - Trigonometric Cotangent of a Complex number. - - The complex value. - The cotangent of the complex number. - - - - Trigonometric Secant of an angle in radian, or hypotenuse / adjacent. Reciprocal of the cosine. - - The angle in radian. - The secant of the radian angle. - - - - Trigonometric Secant of a Complex number. - - The complex value. - The secant of the complex number. - - - - Trigonometric Cosecant of an angle in radian, or hypotenuse / opposite. Reciprocal of the sine. - - The angle in radian. - Cosecant of an angle in radian. - - - - Trigonometric Cosecant of a Complex number. - - The complex value. - The cosecant of a complex number. - - - - Trigonometric principal Arc Sine in radian - - The opposite for a unit hypotenuse (i.e. opposite / hypotenuse). - The angle in radian. - - - - Trigonometric principal Arc Sine of this Complex number. - - The complex value. - The arc sine of a complex number. - - - - Trigonometric principal Arc Cosine in radian - - The adjacent for a unit hypotenuse (i.e. adjacent / hypotenuse). - The angle in radian. - - - - Trigonometric principal Arc Cosine of this Complex number. - - The complex value. - The arc cosine of a complex number. - - - - Trigonometric principal Arc Tangent in radian - - The opposite for a unit adjacent (i.e. opposite / adjacent). - The angle in radian. - - - - Trigonometric principal Arc Tangent of this Complex number. - - The complex value. - The arc tangent of a complex number. - - - - Trigonometric principal Arc Cotangent in radian - - The adjacent for a unit opposite (i.e. adjacent / opposite). - The angle in radian. - - - - Trigonometric principal Arc Cotangent of this Complex number. - - The complex value. - The arc cotangent of a complex number. - - - - Trigonometric principal Arc Secant in radian - - The hypotenuse for a unit adjacent (i.e. hypotenuse / adjacent). - The angle in radian. - - - - Trigonometric principal Arc Secant of this Complex number. - - The complex value. - The arc secant of a complex number. - - - - Trigonometric principal Arc Cosecant in radian - - The hypotenuse for a unit opposite (i.e. hypotenuse / opposite). - The angle in radian. - - - - Trigonometric principal Arc Cosecant of this Complex number. - - The complex value. - The arc cosecant of a complex number. - - - - Hyperbolic Sine - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic sine of the angle. 
- - - - Hyperbolic Sine of a Complex number. - - The complex value. - The hyperbolic sine of a complex number. - - - - Hyperbolic Cosine - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic Cosine of the angle. - - - - Hyperbolic Cosine of a Complex number. - - The complex value. - The hyperbolic cosine of a complex number. - - - - Hyperbolic Tangent in radian - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic tangent of the angle. - - - - Hyperbolic Tangent of a Complex number. - - The complex value. - The hyperbolic tangent of a complex number. - - - - Hyperbolic Cotangent - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic cotangent of the angle. - - - - Hyperbolic Cotangent of a Complex number. - - The complex value. - The hyperbolic cotangent of a complex number. - - - - Hyperbolic Secant - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic secant of the angle. - - - - Hyperbolic Secant of a Complex number. - - The complex value. - The hyperbolic secant of a complex number. - - - - Hyperbolic Cosecant - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic cosecant of the angle. - - - - Hyperbolic Cosecant of a Complex number. - - The complex value. - The hyperbolic cosecant of a complex number. - - - - Hyperbolic Area Sine - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Sine of this Complex number. - - The complex value. - The hyperbolic arc sine of a complex number. - - - - Hyperbolic Area Cosine - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cosine of this Complex number. - - The complex value. - The hyperbolic arc cosine of a complex number. - - - - Hyperbolic Area Tangent - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Tangent of this Complex number. - - The complex value. - The hyperbolic arc tangent of a complex number. - - - - Hyperbolic Area Cotangent - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cotangent of this Complex number. - - The complex value. - The hyperbolic arc cotangent of a complex number. - - - - Hyperbolic Area Secant - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Secant of this Complex number. - - The complex value. - The hyperbolic arc secant of a complex number. - - - - Hyperbolic Area Cosecant - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cosecant of this Complex number. - - The complex value. - The hyperbolic arc cosecant of a complex number. - - - - Hamming window. Named after Richard Hamming. - Symmetric version, useful e.g. for filter design purposes. - - - - - Hamming window. Named after Richard Hamming. - Periodic version, useful e.g. for FFT purposes. - - - - - Hann window. Named after Julius von Hann. - Symmetric version, useful e.g. for filter design purposes. - - - - - Hann window. Named after Julius von Hann. - Periodic version, useful e.g. for FFT purposes. - - - - - Cosine window. - Symmetric version, useful e.g. for filter design purposes. - - - - - Cosine window. - Periodic version, useful e.g. for FFT purposes. - - - - - Lanczos window. - Symmetric version, useful e.g. for filter design purposes. - - - - - Lanczos window. 
- Periodic version, useful e.g. for FFT purposes. - - - - - Gauss window. - - - - - Blackman window. - - - - - Blackman-Harris window. - - - - - Blackman-Nuttall window. - - - - - Bartlett window. - - - - - Bartlett-Hann window. - - - - - Nuttall window. - - - - - Flat top window. - - - - - Uniform rectangular (Dirichlet) window. - - - - - Triangular window. - - - - - Tukey tapering window. A rectangular window bounded - by half a cosine window on each side. - - Width of the window - Fraction of the window occupied by the cosine parts - -
-
diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.dll b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.dll
deleted file mode 100755
index 706a8ae..0000000
Binary files a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.dll and /dev/null differ
diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.xml b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.xml
deleted file mode 100755
index 5f9e8af..0000000
--- a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.xml
+++ /dev/null
@@ -1,57152 +0,0 @@
- [... 57,152 deleted lines of XML documentation for the MathNet.Numerics assembly: array copy helpers, combinatorics (permutations, combinations, variations and random selection), the Complex32 type and complex extension methods, mathematical and physical constants with SI prefixes, library configuration (Control, native providers, parallelization), and numerical differentiation (finite-difference coefficients, NumericalDerivative, NumericalHessian, NumericalJacobian), ... ...]
- The random number generator to use. Optional; the default random source will be used if null. - - - - Generate a random combination, without repetition, by randomly selecting some of N elements. - - Number of elements in the set. - The random number generator to use. Optional; the default random source will be used if null. - Boolean mask array of length N, for each item true if it is selected. - - - - Generate a random combination, without repetition, by randomly selecting k of N elements. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - Boolean mask array of length N, for each item true if it is selected. - - - - Select a random combination, without repetition, from a data sequence by selecting k elements in original order. - - The data source to choose from. - Number of elements (k) to choose from the data set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen combination, in the original order. - - - - Generates a random combination, with repetition, by randomly selecting k of N elements. - - Number of elements in the set. - Number of elements to choose from the set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - Integer mask array of length N, for each item the number of times it was selected. - - - - Select a random combination, with repetition, from a data sequence by selecting k elements in original order. - - The data source to choose from. - Number of elements (k) to choose from the data set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen combination with repetition, in the original order. - - - - Generate a random variation, without repetition, by randomly selecting k of n elements with order. - Implemented using partial Fisher-Yates Shuffling. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - An array of length K that contains the indices of the selections as integers of the interval [0, N). - - - - Select a random variation, without repetition, from a data sequence by randomly selecting k elements in random order. - Implemented using partial Fisher-Yates Shuffling. - - The data source to choose from. - Number of elements (k) to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen variation, in random order. - - - - Generate a random variation, with repetition, by randomly selecting k of n elements with order. - - Number of elements in the set. - Number of elements to choose from the set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - An array of length K that contains the indices of the selections as integers of the interval [0, N). - - - - Select a random variation, with repetition, from a data sequence by randomly selecting k elements in random order. - - The data source to choose from. - Number of elements (k) to choose from the data set. 
Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen variation with repetition, in random order. - - - - 32-bit single precision complex numbers class. - - - - The class Complex32 provides all elementary operations - on complex numbers. All the operators +, -, - *, /, ==, != are defined in the - canonical way. Additional complex trigonometric functions - are also provided. Note that the Complex32 structures - has two special constant values and - . - - - - Complex32 x = new Complex32(1f,2f); - Complex32 y = Complex32.FromPolarCoordinates(1f, Math.Pi); - Complex32 z = (x + y) / (x - y); - - - - For mathematical details about complex numbers, please - have a look at the - Wikipedia - - - - - - The real component of the complex number. - - - - - The imaginary component of the complex number. - - - - - Initializes a new instance of the Complex32 structure with the given real - and imaginary parts. - - The value for the real component. - The value for the imaginary component. - - - - Creates a complex number from a point's polar coordinates. - - A complex number. - The magnitude, which is the distance from the origin (the intersection of the x-axis and the y-axis) to the number. - The phase, which is the angle from the line to the horizontal axis, measured in radians. - - - - Returns a new instance - with a real number equal to zero and an imaginary number equal to zero. - - - - - Returns a new instance - with a real number equal to one and an imaginary number equal to zero. - - - - - Returns a new instance - with a real number equal to zero and an imaginary number equal to one. - - - - - Returns a new instance - with real and imaginary numbers positive infinite. - - - - - Returns a new instance - with real and imaginary numbers not a number. - - - - - Gets the real component of the complex number. - - The real component of the complex number. - - - - Gets the real imaginary component of the complex number. - - The real imaginary component of the complex number. - - - - Gets the phase or argument of this Complex32. - - - Phase always returns a value bigger than negative Pi and - smaller or equal to Pi. If this Complex32 is zero, the Complex32 - is assumed to be positive real with an argument of zero. - - The phase or argument of this Complex32 - - - - Gets the magnitude (or absolute value) of a complex number. - - Assuming that magnitude of (inf,a) and (a,inf) and (inf,inf) is inf and (NaN,a), (a,NaN) and (NaN,NaN) is NaN - The magnitude of the current instance. - - - - Gets the squared magnitude (or squared absolute value) of a complex number. - - The squared magnitude of the current instance. - - - - Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) - - The unity of this Complex32. - - - - Gets a value indicating whether the Complex32 is zero. - - true if this instance is zero; otherwise, false. - - - - Gets a value indicating whether the Complex32 is one. - - true if this instance is one; otherwise, false. - - - - Gets a value indicating whether the Complex32 is the imaginary unit. - - true if this instance is ImaginaryOne; otherwise, false. - - - - Gets a value indicating whether the provided Complex32evaluates - to a value that is not a number. - - - true if this instance is ; otherwise, - false. - - - - - Gets a value indicating whether the provided Complex32 evaluates to an - infinite value. - - - true if this instance is infinite; otherwise, false. 
- - - True if it either evaluates to a complex infinity - or to a directed infinity. - - - - - Gets a value indicating whether the provided Complex32 is real. - - true if this instance is a real number; otherwise, false. - - - - Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. - - - true if this instance is real nonnegative number; otherwise, false. - - - - - Exponential of this Complex32 (exp(x), E^x). - - - The exponential of this complex number. - - - - - Natural Logarithm of this Complex32 (Base E). - - The natural logarithm of this complex number. - - - - Common Logarithm of this Complex32 (Base 10). - - The common logarithm of this complex number. - - - - Logarithm of this Complex32 with custom base. - - The logarithm of this complex number. - - - - Raise this Complex32 to the given value. - - - The exponent. - - - The complex number raised to the given exponent. - - - - - Raise this Complex32 to the inverse of the given value. - - - The root exponent. - - - The complex raised to the inverse of the given exponent. - - - - - The Square (power 2) of this Complex32 - - - The square of this complex number. - - - - - The Square Root (power 1/2) of this Complex32 - - - The square root of this complex number. - - - - - Evaluate all square roots of this Complex32. - - - - - Evaluate all cubic roots of this Complex32. - - - - - Equality test. - - One of complex numbers to compare. - The other complex numbers to compare. - true if the real and imaginary components of the two complex numbers are equal; false otherwise. - - - - Inequality test. - - One of complex numbers to compare. - The other complex numbers to compare. - true if the real or imaginary components of the two complex numbers are not equal; false otherwise. - - - - Unary addition. - - The complex number to operate on. - Returns the same complex number. - - - - Unary minus. - - The complex number to operate on. - The negated value of the . - - - Addition operator. Adds two complex numbers together. - The result of the addition. - One of the complex numbers to add. - The other complex numbers to add. - - - Subtraction operator. Subtracts two complex numbers. - The result of the subtraction. - The complex number to subtract from. - The complex number to subtract. - - - Addition operator. Adds a complex number and float together. - The result of the addition. - The complex numbers to add. - The float value to add. - - - Subtraction operator. Subtracts float value from a complex value. - The result of the subtraction. - The complex number to subtract from. - The float value to subtract. - - - Addition operator. Adds a complex number and float together. - The result of the addition. - The float value to add. - The complex numbers to add. - - - Subtraction operator. Subtracts complex value from a float value. - The result of the subtraction. - The float vale to subtract from. - The complex value to subtract. - - - Multiplication operator. Multiplies two complex numbers. - The result of the multiplication. - One of the complex numbers to multiply. - The other complex number to multiply. - - - Multiplication operator. Multiplies a complex number with a float value. - The result of the multiplication. - The float value to multiply. - The complex number to multiply. - - - Multiplication operator. Multiplies a complex number with a float value. - The result of the multiplication. - The complex number to multiply. - The float value to multiply. - - - Division operator. 
Divides a complex number by another. - Enhanced Smith's algorithm for dividing two complex numbers - - The result of the division. - The dividend. - The divisor. - - - - Helper method for dividing. - - Re first - Im first - Re second - Im second - - - - - Division operator. Divides a float value by a complex number. - Algorithm based on Smith's algorithm - - The result of the division. - The dividend. - The divisor. - - - Division operator. Divides a complex number by a float value. - The result of the division. - The dividend. - The divisor. - - - - Computes the conjugate of a complex number and returns the result. - - - - - Returns the multiplicative inverse of a complex number. - - - - - Converts the value of the current complex number to its equivalent string representation in Cartesian form. - - The string representation of the current instance in Cartesian form. - - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified format for its real and imaginary parts. - - The string representation of the current instance in Cartesian form. - A standard or custom numeric format string. - - is not a valid format string. - - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified culture-specific formatting information. - - The string representation of the current instance in Cartesian form, as specified by . - An object that supplies culture-specific formatting information. - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified format and culture-specific format information for its real and imaginary parts. - The string representation of the current instance in Cartesian form, as specified by and . - A standard or custom numeric format string. - An object that supplies culture-specific formatting information. - - is not a valid format string. - - - - Checks if two complex numbers are equal. Two complex numbers are equal if their - corresponding real and imaginary components are equal. - - - Returns true if the two objects are the same object, or if their corresponding - real and imaginary components are equal, false otherwise. - - - The complex number to compare to with. - - - - - The hash code for the complex number. - - - The hash code of the complex number. - - - The hash code is calculated as - System.Math.Exp(ComplexMath.Absolute(complexNumber)). - - - - - Checks if two complex numbers are equal. Two complex numbers are equal if their - corresponding real and imaginary components are equal. - - - Returns true if the two objects are the same object, or if their corresponding - real and imaginary components are equal, false otherwise. - - - The complex number to compare to with. - - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a float. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Parse a part (real or complex) from a complex number. - - Start Token. - Is set to true if the part identified itself as being imaginary. - - An that supplies culture-specific - formatting information. - - Resulting part as float. 
- - - - - Converts the string representation of a complex number to a single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Converts the string representation of a complex number to single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Explicit conversion of a real decimal to a Complex32. - - The decimal value to convert. - The result of the conversion. - - - - Explicit conversion of a Complex to a Complex32. - - The decimal value to convert. - The result of the conversion. - - - - Implicit conversion of a real byte to a Complex32. - - The byte value to convert. - The result of the conversion. - - - - Implicit conversion of a real short to a Complex32. - - The short value to convert. - The result of the conversion. - - - - Implicit conversion of a signed byte to a Complex32. - - The signed byte value to convert. - The result of the conversion. - - - - Implicit conversion of a unsigned real short to a Complex32. - - The unsigned short value to convert. - The result of the conversion. - - - - Implicit conversion of a real int to a Complex32. - - The int value to convert. - The result of the conversion. - - - - Implicit conversion of a BigInteger int to a Complex32. - - The BigInteger value to convert. - The result of the conversion. - - - - Implicit conversion of a real long to a Complex32. - - The long value to convert. - The result of the conversion. - - - - Implicit conversion of a real uint to a Complex32. - - The uint value to convert. - The result of the conversion. - - - - Implicit conversion of a real ulong to a Complex32. - - The ulong value to convert. - The result of the conversion. - - - - Implicit conversion of a real float to a Complex32. - - The float value to convert. - The result of the conversion. - - - - Implicit conversion of a real double to a Complex32. - - The double value to convert. - The result of the conversion. - - - - Converts this Complex32 to a . - - A with the same values as this Complex32. - - - - Returns the additive inverse of a specified complex number. - - The result of the real and imaginary components of the value parameter multiplied by -1. - A complex number. - - - - Computes the conjugate of a complex number and returns the result. - - The conjugate of . - A complex number. - - - - Adds two complex numbers and returns the result. - - The sum of and . - The first complex number to add. - The second complex number to add. - - - - Subtracts one complex number from another and returns the result. - - The result of subtracting from . - The value to subtract from (the minuend). - The value to subtract (the subtrahend). - - - - Returns the product of two complex numbers. - - The product of the and parameters. - The first complex number to multiply. - The second complex number to multiply. 
- - - - Divides one complex number by another and returns the result. - - The quotient of the division. - The complex number to be divided. - The complex number to divide by. - - - - Returns the multiplicative inverse of a complex number. - - The reciprocal of . - A complex number. - - - - Returns the square root of a specified complex number. - - The square root of . - A complex number. - - - - Gets the absolute value (or magnitude) of a complex number. - - The absolute value of . - A complex number. - - - - Returns e raised to the power specified by a complex number. - - The number e raised to the power . - A complex number that specifies a power. - - - - Returns a specified complex number raised to a power specified by a complex number. - - The complex number raised to the power . - A complex number to be raised to a power. - A complex number that specifies a power. - - - - Returns a specified complex number raised to a power specified by a single-precision floating-point number. - - The complex number raised to the power . - A complex number to be raised to a power. - A single-precision floating-point number that specifies a power. - - - - Returns the natural (base e) logarithm of a specified complex number. - - The natural (base e) logarithm of . - A complex number. - - - - Returns the logarithm of a specified complex number in a specified base. - - The logarithm of in base . - A complex number. - The base of the logarithm. - - - - Returns the base-10 logarithm of a specified complex number. - - The base-10 logarithm of . - A complex number. - - - - Returns the sine of the specified complex number. - - The sine of . - A complex number. - - - - Returns the cosine of the specified complex number. - - The cosine of . - A complex number. - - - - Returns the tangent of the specified complex number. - - The tangent of . - A complex number. - - - - Returns the angle that is the arc sine of the specified complex number. - - The angle which is the arc sine of . - A complex number. - - - - Returns the angle that is the arc cosine of the specified complex number. - - The angle, measured in radians, which is the arc cosine of . - A complex number that represents a cosine. - - - - Returns the angle that is the arc tangent of the specified complex number. - - The angle that is the arc tangent of . - A complex number. - - - - Returns the hyperbolic sine of the specified complex number. - - The hyperbolic sine of . - A complex number. - - - - Returns the hyperbolic cosine of the specified complex number. - - The hyperbolic cosine of . - A complex number. - - - - Returns the hyperbolic tangent of the specified complex number. - - The hyperbolic tangent of . - A complex number. - - - - Extension methods for the Complex type provided by System.Numerics - - - - - Gets the squared magnitude of the Complex number. - - The number to perform this operation on. - The squared magnitude of the Complex number. - - - - Gets the squared magnitude of the Complex number. - - The number to perform this operation on. - The squared magnitude of the Complex number. - - - - Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) - - The unity of this Complex. - - - - Gets the conjugate of the Complex number. - - The number to perform this operation on. - - The semantic of setting the conjugate is such that - - // a, b of type Complex32 - a.Conjugate = b; - - is equivalent to - - // a, b of type Complex32 - a = b.Conjugate - - - The conjugate of the number. 
- - - - Returns the multiplicative inverse of a complex number. - - - - - Exponential of this Complex (exp(x), E^x). - - The number to perform this operation on. - - The exponential of this complex number. - - - - - Natural Logarithm of this Complex (Base E). - - The number to perform this operation on. - - The natural logarithm of this complex number. - - - - - Common Logarithm of this Complex (Base 10). - - The common logarithm of this complex number. - - - - Logarithm of this Complex with custom base. - - The logarithm of this complex number. - - - - Raise this Complex to the given value. - - The number to perform this operation on. - - The exponent. - - - The complex number raised to the given exponent. - - - - - Raise this Complex to the inverse of the given value. - - The number to perform this operation on. - - The root exponent. - - - The complex raised to the inverse of the given exponent. - - - - - The Square (power 2) of this Complex - - The number to perform this operation on. - - The square of this complex number. - - - - - The Square Root (power 1/2) of this Complex - - The number to perform this operation on. - - The square root of this complex number. - - - - - Evaluate all square roots of this Complex. - - - - - Evaluate all cubic roots of this Complex. - - - - - Gets a value indicating whether the Complex32 is zero. - - The number to perform this operation on. - true if this instance is zero; otherwise, false. - - - - Gets a value indicating whether the Complex32 is one. - - The number to perform this operation on. - true if this instance is one; otherwise, false. - - - - Gets a value indicating whether the Complex32 is the imaginary unit. - - true if this instance is ImaginaryOne; otherwise, false. - The number to perform this operation on. - - - - Gets a value indicating whether the provided Complex32evaluates - to a value that is not a number. - - The number to perform this operation on. - - true if this instance is NaN; otherwise, - false. - - - - - Gets a value indicating whether the provided Complex32 evaluates to an - infinite value. - - The number to perform this operation on. - - true if this instance is infinite; otherwise, false. - - - True if it either evaluates to a complex infinity - or to a directed infinity. - - - - - Gets a value indicating whether the provided Complex32 is real. - - The number to perform this operation on. - true if this instance is a real number; otherwise, false. - - - - Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. - - The number to perform this operation on. - - true if this instance is real nonnegative number; otherwise, false. - - - - - Returns a Norm of a value of this type, which is appropriate for measuring how - close this value is to zero. - - - - - Returns a Norm of a value of this type, which is appropriate for measuring how - close this value is to zero. - - - - - Returns a Norm of the difference of two values of this type, which is - appropriate for measuring how close together these two values are. - - - - - Returns a Norm of the difference of two values of this type, which is - appropriate for measuring how close together these two values are. - - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - The string to parse. 
- - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Parse a part (real or complex) from a complex number. - - Start Token. - Is set to true if the part identified itself as being imaginary. - - An that supplies culture-specific - formatting information. - - Resulting part as double. - - - - - Converts the string representation of a complex number to a double-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. - - - - - Converts the string representation of a complex number to double-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Creates a Complex32 number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - - - Creates a Complex32 number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Converts the string representation of a complex number to a single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized. - - - - - Converts the string representation of a complex number to single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. - - - - - A collection of frequently used mathematical constants. 
- - - - The number e - - - The number log[2](e) - - - The number log[10](e) - - - The number log[e](2) - - - The number log[e](10) - - - The number log[e](pi) - - - The number log[e](2*pi)/2 - - - The number 1/e - - - The number sqrt(e) - - - The number sqrt(2) - - - The number sqrt(3) - - - The number sqrt(1/2) = 1/sqrt(2) = sqrt(2)/2 - - - The number sqrt(3)/2 - - - The number pi - - - The number pi*2 - - - The number pi/2 - - - The number pi*3/2 - - - The number pi/4 - - - The number sqrt(pi) - - - The number sqrt(2pi) - - - The number sqrt(pi/2) - - - The number sqrt(2*pi*e) - - - The number log(sqrt(2*pi)) - - - The number log(sqrt(2*pi*e)) - - - The number log(2 * sqrt(e / pi)) - - - The number 1/pi - - - The number 2/pi - - - The number 1/sqrt(pi) - - - The number 1/sqrt(2pi) - - - The number 2/sqrt(pi) - - - The number 2 * sqrt(e / pi) - - - The number (pi)/180 - factor to convert from Degree (deg) to Radians (rad). - - - - - The number (pi)/200 - factor to convert from NewGrad (grad) to Radians (rad). - - - - - The number ln(10)/20 - factor to convert from Power Decibel (dB) to Neper (Np). Use this version when the Decibel represent a power gain but the compared values are not powers (e.g. amplitude, current, voltage). - - - The number ln(10)/10 - factor to convert from Neutral Decibel (dB) to Neper (Np). Use this version when either both or neither of the Decibel and the compared values represent powers. - - - The Catalan constant - Sum(k=0 -> inf){ (-1)^k/(2*k + 1)2 } - - - The Euler-Mascheroni constant - lim(n -> inf){ Sum(k=1 -> n) { 1/k - log(n) } } - - - The number (1+sqrt(5))/2, also known as the golden ratio - - - The Glaisher constant - e^(1/12 - Zeta(-1)) - - - The Khinchin constant - prod(k=1 -> inf){1+1/(k*(k+2))^log(k,2)} - - - - The size of a double in bytes. - - - - - The size of an int in bytes. - - - - - The size of a float in bytes. - - - - - The size of a Complex in bytes. - - - - - The size of a Complex in bytes. 
- - - - Speed of Light in Vacuum: c_0 = 2.99792458e8 [m s^-1] (defined, exact; 2007 CODATA) - - - Magnetic Permeability in Vacuum: mu_0 = 4*Pi * 10^-7 [N A^-2 = kg m A^-2 s^-2] (defined, exact; 2007 CODATA) - - - Electric Permittivity in Vacuum: epsilon_0 = 1/(mu_0*c_0^2) [F m^-1 = A^2 s^4 kg^-1 m^-3] (defined, exact; 2007 CODATA) - - - Characteristic Impedance of Vacuum: Z_0 = mu_0*c_0 [Ohm = m^2 kg s^-3 A^-2] (defined, exact; 2007 CODATA) - - - Newtonian Constant of Gravitation: G = 6.67429e-11 [m^3 kg^-1 s^-2] (2007 CODATA) - - - Planck's constant: h = 6.62606896e-34 [J s = m^2 kg s^-1] (2007 CODATA) - - - Reduced Planck's constant: h_bar = h / (2*Pi) [J s = m^2 kg s^-1] (2007 CODATA) - - - Planck mass: m_p = (h_bar*c_0/G)^(1/2) [kg] (2007 CODATA) - - - Planck temperature: T_p = (h_bar*c_0^5/G)^(1/2)/k [K] (2007 CODATA) - - - Planck length: l_p = h_bar/(m_p*c_0) [m] (2007 CODATA) - - - Planck time: t_p = l_p/c_0 [s] (2007 CODATA) - - - Elementary Electron Charge: e = 1.602176487e-19 [C = A s] (2007 CODATA) - - - Magnetic Flux Quantum: theta_0 = h/(2*e) [Wb = m^2 kg s^-2 A^-1] (2007 CODATA) - - - Conductance Quantum: G_0 = 2*e^2/h [S = m^-2 kg^-1 s^3 A^2] (2007 CODATA) - - - Josephson Constant: K_J = 2*e/h [Hz V^-1] (2007 CODATA) - - - Von Klitzing Constant: R_K = h/e^2 [Ohm = m^2 kg s^-3 A^-2] (2007 CODATA) - - - Bohr Magneton: mu_B = e*h_bar/2*m_e [J T^-1] (2007 CODATA) - - - Nuclear Magneton: mu_N = e*h_bar/2*m_p [J T^-1] (2007 CODATA) - - - Fine Structure Constant: alpha = e^2/4*Pi*e_0*h_bar*c_0 [1] (2007 CODATA) - - - Rydberg Constant: R_infty = alpha^2*m_e*c_0/2*h [m^-1] (2007 CODATA) - - - Bor Radius: a_0 = alpha/4*Pi*R_infty [m] (2007 CODATA) - - - Hartree Energy: E_h = 2*R_infty*h*c_0 [J] (2007 CODATA) - - - Quantum of Circulation: h/2*m_e [m^2 s^-1] (2007 CODATA) - - - Fermi Coupling Constant: G_F/(h_bar*c_0)^3 [GeV^-2] (2007 CODATA) - - - Weak Mixin Angle: sin^2(theta_W) [1] (2007 CODATA) - - - Electron Mass: [kg] (2007 CODATA) - - - Electron Mass Energy Equivalent: [J] (2007 CODATA) - - - Electron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Electron Compton Wavelength: [m] (2007 CODATA) - - - Classical Electron Radius: [m] (2007 CODATA) - - - Thomson Cross Section: [m^2] (2002 CODATA) - - - Electron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Electon G-Factor: [1] (2007 CODATA) - - - Muon Mass: [kg] (2007 CODATA) - - - Muon Mass Energy Equivalent: [J] (2007 CODATA) - - - Muon Molar Mass: [kg mol^-1] (2007 CODATA) - - - Muon Compton Wavelength: [m] (2007 CODATA) - - - Muon Magnetic Moment: [J T^-1] (2007 CODATA) - - - Muon G-Factor: [1] (2007 CODATA) - - - Tau Mass: [kg] (2007 CODATA) - - - Tau Mass Energy Equivalent: [J] (2007 CODATA) - - - Tau Molar Mass: [kg mol^-1] (2007 CODATA) - - - Tau Compton Wavelength: [m] (2007 CODATA) - - - Proton Mass: [kg] (2007 CODATA) - - - Proton Mass Energy Equivalent: [J] (2007 CODATA) - - - Proton Molar Mass: [kg mol^-1] (2007 CODATA) - - - Proton Compton Wavelength: [m] (2007 CODATA) - - - Proton Magnetic Moment: [J T^-1] (2007 CODATA) - - - Proton G-Factor: [1] (2007 CODATA) - - - Proton Shielded Magnetic Moment: [J T^-1] (2007 CODATA) - - - Proton Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Proton Shielded Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Neutron Mass: [kg] (2007 CODATA) - - - Neutron Mass Energy Equivalent: [J] (2007 CODATA) - - - Neutron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Neuron Compton Wavelength: [m] (2007 CODATA) - - - Neutron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Neutron G-Factor: [1] 
(2007 CODATA) - - - Neutron Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Deuteron Mass: [kg] (2007 CODATA) - - - Deuteron Mass Energy Equivalent: [J] (2007 CODATA) - - - Deuteron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Deuteron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Helion Mass: [kg] (2007 CODATA) - - - Helion Mass Energy Equivalent: [J] (2007 CODATA) - - - Helion Molar Mass: [kg mol^-1] (2007 CODATA) - - - Avogadro constant: [mol^-1] (2010 CODATA) - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 - - - The SI prefix factor corresponding to 1 000 - - - The SI prefix factor corresponding to 100 - - - The SI prefix factor corresponding to 10 - - - The SI prefix factor corresponding to 0.1 - - - The SI prefix factor corresponding to 0.01 - - - The SI prefix factor corresponding to 0.001 - - - The SI prefix factor corresponding to 0.000 001 - - - The SI prefix factor corresponding to 0.000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 000 000 001 - - - - Sets parameters for the library. - - - - - Use a specific provider if configured, e.g. using - environment variables, or fall back to the best providers. - - - - - Use the best provider available. - - - - - Use the Intel MKL native provider for linear algebra. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Use the Intel MKL native provider for linear algebra, with the specified configuration parameters. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Try to use the Intel MKL native provider for linear algebra. - - - True if the provider was found and initialized successfully. - False if it failed and the previous provider is still active. - - - - - Use the Nvidia CUDA native provider for linear algebra. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Try to use the Nvidia CUDA native provider for linear algebra. - - - True if the provider was found and initialized successfully. - False if it failed and the previous provider is still active. - - - - - Use the OpenBLAS native provider for linear algebra. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Try to use the OpenBLAS native provider for linear algebra. - - - True if the provider was found and initialized successfully. - False if it failed and the previous provider is still active. - - - - - Try to use any available native provider in an undefined order. - - - True if one of the native providers was found and successfully initialized. - False if it failed and the previous provider is still active. 
- - - - - Gets or sets a value indicating whether the distribution classes check validate each parameter. - For the multivariate distributions this could involve an expensive matrix factorization. - The default setting of this property is true. - - - - - Gets or sets a value indicating whether to use thread safe random number generators (RNG). - Thread safe RNG about two and half time slower than non-thread safe RNG. - - - true to use thread safe random number generators ; otherwise, false. - - - - - Optional path to try to load native provider binaries from. - - - - - Gets or sets a value indicating how many parallel worker threads shall be used - when parallelization is applicable. - - Default to the number of processor cores, must be between 1 and 1024 (inclusive). - - - - Gets or sets the TaskScheduler used to schedule the worker tasks. - - - - - Gets or sets the order of the matrix when linear algebra provider - must calculate multiply in parallel threads. - - The order. Default 64, must be at least 3. - - - - Gets or sets the number of elements a vector or matrix - must contain before we multiply threads. - - Number of elements. Default 300, must be at least 3. - - - - Numerical Derivative. - - - - - Initialized a NumericalDerivative with the given points and center. - - - - - Initialized a NumericalDerivative with the default points and center for the given order. - - - - - Evaluates the derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - Derivative order. - - - - Creates a function handle for the derivative of a scalar univariate function. - - Univariate function handle. - Derivative order. - - - - Evaluates the first derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - - - - Creates a function handle for the first derivative of a scalar univariate function. - - Univariate function handle. - - - - Evaluates the second derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - - - - Creates a function handle for the second derivative of a scalar univariate function. - - Univariate function handle. - - - - Evaluates the partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - - - - Creates a function handle for the partial derivative of a multivariate function. - - Multivariate function handle. - Index of independent variable for partial derivative. - Derivative order. - - - - Evaluates the first partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. - Index of independent variable for partial derivative. - - - - Creates a function handle for the first partial derivative of a multivariate function. - - Multivariate function handle. - Index of independent variable for partial derivative. - - - - Evaluates the partial derivative of a bivariate function. - - Bivariate function handle. - First argument at which to evaluate the derivative. - Second argument at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - - - - Creates a function handle for the partial derivative of a bivariate function. - - Bivariate function handle. - Index of independent variable for partial derivative. - Derivative order. 
- - - - Evaluates the first partial derivative of a bivariate function. - - Bivariate function handle. - First argument at which to evaluate the derivative. - Second argument at which to evaluate the derivative. - Index of independent variable for partial derivative. - - - - Creates a function handle for the first partial derivative of a bivariate function. - - Bivariate function handle. - Index of independent variable for partial derivative. - - - - Class to calculate finite difference coefficients using Taylor series expansion method. - - - For n points, coefficients are calculated up to the maximum derivative order possible (n-1). - The current function value position specifies the "center" for surrounding coefficients. - Selecting the first, middle or last positions represent forward, backwards and central difference methods. - - - - - - - Number of points for finite difference coefficients. Changing this value recalculates the coefficients table. - - - - - Initializes a new instance of the class. - - Number of finite difference coefficients. - - - - Gets the finite difference coefficients for a specified center and order. - - Current function position with respect to coefficients. Must be within point range. - Order of finite difference coefficients. - Vector of finite difference coefficients. - - - - Gets the finite difference coefficients for all orders at a specified center. - - Current function position with respect to coefficients. Must be within point range. - Rectangular array of coefficients, with columns specifying order. - - - - Type of finite different step size. - - - - - The absolute step size value will be used in numerical derivatives, regardless of order or function parameters. - - - - - A base step size value, h, will be scaled according to the function input parameter. A common example is hx = h*(1+abs(x)), however - this may vary depending on implementation. This definition only guarantees that the only scaling will be relative to the - function input parameter and not the order of the finite difference derivative. - - - - - A base step size value, eps (typically machine precision), is scaled according to the finite difference coefficient order - and function input parameter. The initial scaling according to finite different coefficient order can be thought of as producing a - base step size, h, that is equivalent to scaling. This step size is then scaled according to the function - input parameter. Although implementation may vary, an example of second order accurate scaling may be (eps)^(1/3)*(1+abs(x)). - - - - - Class to evaluate the numerical derivative of a function using finite difference approximations. - Variable point and center methods can be initialized . - This class can also be used to return function handles (delegates) for a fixed derivative order and variable. - It is possible to evaluate the derivative and partial derivative of univariate and multivariate functions respectively. - - - - - Initializes a NumericalDerivative class with the default 3 point center difference method. - - - - - Initialized a NumericalDerivative class. - - Number of points for finite difference derivatives. - Location of the center with respect to other points. Value ranges from zero to points-1. - - - - Sets and gets the finite difference step size. This value is for each function evaluation if relative step size types are used. - If the base step size used in scaling is desired, see . - - - Setting then getting the StepSize may return a different value. 
This is not unusual since a user-defined step size is converted to a - base-2 representable number to improve finite difference accuracy. - - - - - Sets and gets the base finite difference step size. This assigned value to this parameter is only used if is set to RelativeX. - However, if the StepType is Relative, it will contain the base step size computed from based on the finite difference order. - - - - - Sets and gets the base finite difference step size. This parameter is only used if is set to Relative. - By default this is set to machine epsilon, from which is computed. - - - - - Sets and gets the location of the center point for the finite difference derivative. - - - - - Number of times a function is evaluated for numerical derivatives. - - - - - Type of step size for computing finite differences. If set to absolute, dx = h. - If set to relative, dx = (1+abs(x))*h^(2/(order+1)). This provides accurate results when - h is approximately equal to the square-root of machine accuracy, epsilon. - - - - - Evaluates the derivative of equidistant points using the finite difference method. - - Vector of points StepSize apart. - Derivative order. - Finite difference step size. - Derivative of points of the specified order. - - - - Evaluates the derivative of a scalar univariate function. - - - Supplying the optional argument currentValue will reduce the number of function evaluations - required to calculate the finite difference derivative. - - Function handle. - Point at which to compute the derivative. - Derivative order. - Current function value at center. - Function derivative at x of the specified order. - - - - Creates a function handle for the derivative of a scalar univariate function. - - Input function handle. - Derivative order. - Function handle that evaluates the derivative of input function at a fixed order. - - - - Evaluates the partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - Current function value at center. - Function partial derivative at x of the specified order. - - - - Evaluates the partial derivatives of a multivariate function array. - - - This function assumes the input vector x is of the correct length for f. - - Multivariate vector function array handle. - Vector at which to evaluate the derivatives. - Index of independent variable for partial derivative. - Derivative order. - Current function value at center. - Vector of functions partial derivatives at x of the specified order. - - - - Creates a function handle for the partial derivative of a multivariate function. - - Input function handle. - Index of the independent variable for partial derivative. - Derivative order. - Function handle that evaluates partial derivative of input function at a fixed order. - - - - Creates a function handle for the partial derivative of a vector multivariate function. - - Input function handle. - Index of the independent variable for partial derivative. - Derivative order. - Function handle that evaluates partial derivative of input function at fixed order. - - - - Evaluates the mixed partial derivative of variable order for multivariate functions. - - - This function recursively uses to evaluate mixed partial derivative. - Therefore, it is more efficient to call for higher order derivatives of - a single independent variable. - - Multivariate function handle. - Points at which to evaluate the derivative. 
- Vector of indices for the independent variables at descending derivative orders. - Highest order of differentiation. - Current function value at center. - Function mixed partial derivative at x of the specified order. - - - - Evaluates the mixed partial derivative of variable order for multivariate function arrays. - - - This function recursively uses to evaluate mixed partial derivative. - Therefore, it is more efficient to call for higher order derivatives of - a single independent variable. - - Multivariate function array handle. - Vector at which to evaluate the derivative. - Vector of indices for the independent variables at descending derivative orders. - Highest order of differentiation. - Current function value at center. - Function mixed partial derivatives at x of the specified order. - - - - Creates a function handle for the mixed partial derivative of a multivariate function. - - Input function handle. - Vector of indices for the independent variables at descending derivative orders. - Highest derivative order. - Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. - - - - Creates a function handle for the mixed partial derivative of a multivariate vector function. - - Input vector function handle. - Vector of indices for the independent variables at descending derivative orders. - Highest derivative order. - Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. - - - - Resets the evaluation counter. - - - - - Class for evaluating the Hessian of a smooth continuously differentiable function using finite differences. - By default, a central 3-point method is used. - - - - - Number of function evaluations. - - - - - Creates a numerical Hessian object with a three point central difference method. - - - - - Creates a numerical Hessian with a specified differentiation scheme. - - Number of points for Hessian evaluation. - Center point for differentiation. - - - - Evaluates the Hessian of the scalar univariate function f at points x. - - Scalar univariate function handle. - Point at which to evaluate Hessian. - Hessian tensor. - - - - Evaluates the Hessian of a multivariate function f at points x. - - - This method of computing the Hessian is only valid for Lipschitz continuous functions. - The function mirrors the Hessian along the diagonal since d2f/dxdy = d2f/dydx for continuously differentiable functions. - - Multivariate function handle.> - Points at which to evaluate Hessian.> - Hessian tensor. - - - - Resets the function evaluation counter for the Hessian. - - - - - Class for evaluating the Jacobian of a function using finite differences. - By default, a central 3-point method is used. - - - - - Number of function evaluations. - - - - - Creates a numerical Jacobian object with a three point central difference method. - - - - - Creates a numerical Jacobian with a specified differentiation scheme. - - Number of points for Jacobian evaluation. - Center point for differentiation. - - - - Evaluates the Jacobian of scalar univariate function f at point x. - - Scalar univariate function handle. - Point at which to evaluate Jacobian. - Jacobian vector. - - - - Evaluates the Jacobian of a multivariate function f at vector x. - - - This function assumes that the length of vector x consistent with the argument count of f. - - Multivariate function handle. - Points at which to evaluate Jacobian. - Jacobian vector. 
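The `NumericalHessian` and `NumericalJacobian` entries above describe object-based finite-difference evaluators that default to a 3-point central scheme. A short sketch, assuming the `Evaluate` signatures described in those entries:

```csharp
// Illustrative only; Evaluate signatures assumed from the entries above.
using System;
using MathNet.Numerics.Differentiation;

class HessianJacobianExample
{
    static void Main()
    {
        // f(x, y) = x^2 + 3xy
        Func<double[], double> f = v => v[0] * v[0] + 3.0 * v[0] * v[1];
        var x = new[] { 1.0, 2.0 };

        var hessian = new NumericalHessian();   // default 3-point central difference
        double[,] h = hessian.Evaluate(f, x);   // expect [[2, 3], [3, 0]]

        var jacobian = new NumericalJacobian();
        double[] j = jacobian.Evaluate(f, x);   // gradient: [2x + 3y, 3x] = [8, 3]

        Console.WriteLine($"{h[0, 0]} {h[0, 1]} {j[0]} {j[1]}");
    }
}
```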
- - - - Evaluates the Jacobian of a multivariate function f at vector x given a current function value. - - - To minimize the number of function evaluations, a user can supply the current value of the function - to be used in computing the Jacobian. This value must correspond to the "center" location for the - finite differencing. If a scheme is used where the center value is not evaluated, this will provide no - added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. - - Multivariate function handle. - Points at which to evaluate Jacobian. - Current function value at finite difference center. - Jacobian vector. - - - - Evaluates the Jacobian of a multivariate function array f at vector x. - - Multivariate function array handle. - Vector at which to evaluate Jacobian. - Jacobian matrix. - - - - Evaluates the Jacobian of a multivariate function array f at vector x given a vector of current function values. - - - To minimize the number of function evaluations, a user can supply a vector of current values of the functions - to be used in computing the Jacobian. These value must correspond to the "center" location for the - finite differencing. If a scheme is used where the center value is not evaluated, this will provide no - added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. - - Multivariate function array handle. - Vector at which to evaluate Jacobian. - Vector of current function values. - Jacobian matrix. - - - - Resets the function evaluation counter for the Jacobian. - - - - - Evaluates the Riemann-Liouville fractional derivative that uses the double exponential integration. - - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The expected relative accuracy of the Double-Exponential integration. - Approximation of the differintegral of order n at x. - - - - Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Legendre integration. - - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The number of Gauss-Legendre points. - Approximation of the differintegral of order n at x. - - - - Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Kronrod integration. - - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The expected relative accuracy of the Gauss-Kronrod integration. - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. - Approximation of the differintegral of order n at x. - - - - Metrics to measure the distance between two structures. - - - - - Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. - - - - - Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. - - - - - Sum of Absolute Difference (SAD), i.e. 
the L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. - - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. - - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. - - - - - Canberra Distance, a weighted version of the L1-norm of the difference. - - - - - Canberra Distance, a weighted version of the L1-norm of the difference. - - - - - Cosine Distance, representing the angular distance while ignoring the scale. - - - - - Cosine Distance, representing the angular distance while ignoring the scale. - - - - - Hamming Distance, i.e. the number of positions that have different values in the vectors. - - - - - Hamming Distance, i.e. the number of positions that have different values in the vectors. - - - - - Pearson's distance, i.e. 1 - the person correlation coefficient. - - - - - Jaccard distance, i.e. 1 - the Jaccard index. - - Thrown if a or b are null. - Throw if a and b are of different lengths. - Jaccard distance. - - - - Jaccard distance, i.e. 1 - the Jaccard index. - - Thrown if a or b are null. - Throw if a and b are of different lengths. - Jaccard distance. - - - - Discrete Univariate Bernoulli distribution. - The Bernoulli distribution is a distribution over bits. The parameter - p specifies the probability that a 1 is generated. - Wikipedia - Bernoulli distribution. - - - - - Initializes a new instance of the Bernoulli class. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - If the Bernoulli parameter is not in the range [0,1]. - - - - Initializes a new instance of the Bernoulli class. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - The random number generator which is used to draw random samples. - If the Bernoulli parameter is not in the range [0,1]. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. 
- - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the mode of the distribution. - - - - - Gets all modes of the distribution. - - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the cumulative distribution at location . - - - - - Generates one sample from the Bernoulli distribution. - - The random source to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A random sample from the Bernoulli distribution. - - - - Samples a Bernoulli distributed random variable. - - A sample from the Bernoulli distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Bernoulli distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a Bernoulli distributed random variable. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A sample from the Bernoulli distribution. - - - - Samples a sequence of Bernoulli distributed random variables. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Samples a Bernoulli distributed random variable. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A sample from the Bernoulli distribution. 
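As a usage sketch for the Bernoulli entries above (constructor, PMF/CDF, and the static sampling overloads); the member names are assumed from these entries:

```csharp
// Illustrative only; member names assumed from the entries above.
using System;
using MathNet.Numerics.Distributions;

class BernoulliExample
{
    static void Main()
    {
        var coin = new Bernoulli(0.3);                        // P(X = 1) = 0.3
        Console.WriteLine(coin.Mean);                         // 0.3
        Console.WriteLine(coin.Probability(1));               // PMF at k = 1 -> 0.3
        Console.WriteLine(coin.CumulativeDistribution(0.0));  // P(X <= 0) = 0.7

        // Static sampling without constructing an instance.
        int draw = Bernoulli.Sample(new Random(42), 0.3);
        Console.WriteLine(draw);                              // 0 or 1
    }
}
```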
- - - - Samples a sequence of Bernoulli distributed random variables. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Continuous Univariate Beta distribution. - For details about this distribution, see - Wikipedia - Beta distribution. - - - There are a few special cases for the parameterization of the Beta distribution. When both - shape parameters are positive infinity, the Beta distribution degenerates to a point distribution - at 0.5. When one of the shape parameters is positive infinity, the distribution degenerates to a point - distribution at the positive infinity. When both shape parameters are 0.0, the Beta distribution - degenerates to a Bernoulli distribution with parameter 0.5. When one shape parameter is 0.0, the - distribution degenerates to a point distribution at the non-zero shape parameter. - - - - - Initializes a new instance of the Beta class. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - Initializes a new instance of the Beta class. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - A string representation of the Beta distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - Gets the α shape parameter of the Beta distribution. Range: α ≥ 0. - - - - - Gets the β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Beta distribution. - - - - - Gets the variance of the Beta distribution. - - - - - Gets the standard deviation of the Beta distribution. - - - - - Gets the entropy of the Beta distribution. - - - - - Gets the skewness of the Beta distribution. - - - - - Gets the mode of the Beta distribution; when there are multiple answers, this routine will return 0.5. - - - - - Gets the median of the Beta distribution. - - - - - Gets the minimum of the Beta distribution. - - - - - Gets the maximum of the Beta distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . 
- - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the Beta distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Beta distribution. - - a sequence of samples from the distribution. - - - - Samples Beta distributed random variables by sampling two Gamma variables and normalizing. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a random number from the Beta distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. 
- The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Discrete Univariate Beta-Binomial distribution. - The beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising - when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. - The beta-binomial distribution is the binomial distribution in which the probability of success at each of n trials is not fixed but randomly drawn from a beta distribution. - It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics to capture overdispersion in binomial type distributed data. - Wikipedia - Beta-Binomial distribution. - - - - - Initializes a new instance of the class. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - - - - Initializes a new instance of the class. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - - - - Tests whether the provided values are valid parameters for this distribution. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. 
Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Samples BetaBinomial distributed random variables by sampling a Beta distribution then passing to a Binomial distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a random number from the BetaBinomial distribution. - - - - Samples a BetaBinomial distributed random variable. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of BetaBinomial distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a BetaBinomial distributed random variable. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - - - - Samples an array of BetaBinomial distributed random variables. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - - - - Initializes a new instance of the BetaScaled class. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - - - - Initializes a new instance of the BetaScaled class. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The random number generator which is used to draw random samples. 
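A brief C# sketch of the Beta entries above (instance construction, the static PDF helper, and sampling); the same constructor-plus-static-helper pattern carries over to BetaBinomial and BetaScaled. Member names follow my reading of these entries, so verify against the library:

```csharp
// Illustrative only; member names assumed from the entries above.
using System;
using MathNet.Numerics.Distributions;

class BetaExample
{
    static void Main()
    {
        var beta = new Beta(2.0, 5.0);
        Console.WriteLine(beta.Mean);                         // α/(α+β) = 2/7
        Console.WriteLine(beta.CumulativeDistribution(0.3));  // P(X <= 0.3)

        // Static helpers mirror the instance methods.
        Console.WriteLine(Beta.PDF(2.0, 5.0, 0.3));
        Console.WriteLine(Beta.Sample(new Random(1), 2.0, 5.0));
    }
}
```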
- - - - Create a Beta PERT distribution, used in risk analysis and other domains where an expert forecast - is used to construct an underlying beta distribution. - - The minimum value. - The maximum value. - The most likely value (mode). - The random number generator which is used to draw random samples. - The Beta distribution derived from the PERT parameters. - - - - A string representation of the distribution. - - A string representation of the BetaScaled distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - - - - Gets the α shape parameter of the BetaScaled distribution. Range: α > 0. - - - - - Gets the β shape parameter of the BetaScaled distribution. Range: β > 0. - - - - - Gets the location (μ) of the BetaScaled distribution. - - - - - Gets the scale (σ) of the BetaScaled distribution. Range: σ > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the BetaScaled distribution. - - - - - Gets the variance of the BetaScaled distribution. - - - - - Gets the standard deviation of the BetaScaled distribution. - - - - - Gets the entropy of the BetaScaled distribution. - - - - - Gets the skewness of the BetaScaled distribution. - - - - - Gets the mode of the BetaScaled distribution; when there are multiple answers, this routine will return 0.5. - - - - - Gets the median of the BetaScaled distribution. - - - - - Gets the minimum of the BetaScaled distribution. - - - - - Gets the maximum of the BetaScaled distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The α shape parameter of the BetaScaled distribution. 
Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - The random number generator to use. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. 
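The BetaScaled entries above add a location/scale pair to the Beta shape parameters, and the PERT entry describes a factory that builds the distribution from a minimum, maximum, and most-likely estimate. A hedged sketch follows; the `BetaScaled.PERT` factory name and the static `PDF` parameter order are assumptions based on those entries:

```csharp
// Illustrative only; BetaScaled.PERT and the static PDF parameter order
// are assumptions based on the entries above.
using System;
using MathNet.Numerics.Distributions;

class BetaScaledExample
{
    static void Main()
    {
        // A Beta(2, 5) shape placed at location 10 with scale 10, i.e. support [10, 20].
        var scaled = new BetaScaled(2.0, 5.0, 10.0, 10.0);
        Console.WriteLine(scaled.Mean);
        Console.WriteLine(BetaScaled.PDF(2.0, 5.0, 10.0, 10.0, 12.0));
        Console.WriteLine(scaled.Sample());

        // PERT-style expert estimate: minimum, maximum, most likely value.
        var pert = BetaScaled.PERT(1.0, 10.0, 7.0);
        Console.WriteLine(pert.Mean);
    }
}
```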
- - - - Discrete Univariate Binomial distribution. - For details about this distribution, see - Wikipedia - Binomial distribution. - - - The distribution is parameterized by a probability (between 0.0 and 1.0). - - - - - Initializes a new instance of the Binomial class. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - If is not in the interval [0.0,1.0]. - If is negative. - - - - Initializes a new instance of the Binomial class. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The random number generator which is used to draw random samples. - If is not in the interval [0.0,1.0]. - If is negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - - - - Gets the success probability in each trial. Range: 0 ≤ p ≤ 1. - - - - - Gets the number of trials. Range: n ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the mode of the distribution. - - - - - Gets all modes of the distribution. - - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the cumulative distribution at location . - - - - - Generates a sample from the Binomial distribution without doing parameter checking. - - The random number generator to use. - The success probability (p) in each trial. 
Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successful trials. - - - - Samples a Binomially distributed random variable. - - The number of successes in N trials. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Binomially distributed random variables. - - a sequence of successes in N trials. - - - - Samples a binomially distributed random variable. - - The random number generator to use. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successes in trials. - - - - Samples a sequence of binomially distributed random variable. - - The random number generator to use. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Samples a binomially distributed random variable. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successes in trials. - - - - Samples a sequence of binomially distributed random variable. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Gets the scale (a) of the distribution. Range: a > 0. - - - - - Gets the first shape parameter (c) of the distribution. Range: c > 0. - - - - - Gets the second shape parameter (k) of the distribution. Range: k > 0. - - - - - Initializes a new instance of the Burr Type XII class. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The random number generator which is used to draw random samples. Optional, can be null. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - - - - Gets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Burr distribution. - - - - - Gets the variance of the Burr distribution. - - - - - Gets the standard deviation of the Burr distribution. - - - - - Gets the mode of the Burr distribution. - - - - - Gets the minimum of the Burr distribution. - - - - - Gets the maximum of the Burr distribution. - - - - - Gets the entropy of the Burr distribution (currently not supported). - - - - - Gets the skewness of the Burr distribution. - - - - - Gets the median of the Burr distribution. - - - - - Generates a sample from the Burr distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. 
- - The array to fill with the samples. - - - - Generates a sequence of samples from the Burr distribution. - - a sequence of samples from the distribution. - - - - Generates a sample from the Burr distribution. - - The random number generator to use. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - - - - Generates a sequence of samples from the Burr distribution. - - The random number generator to use. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - a sequence of samples from the distribution. - - - - Gets the n-th raw moment of the distribution. - - The order (n) of the moment. Range: n ≥ 1. - the n-th moment of the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Discrete Univariate Categorical distribution. - For details about this distribution, see - Wikipedia - Categorical distribution. This - distribution is sometimes called the Discrete distribution. - - - The distribution is parameterized by a vector of ratios: in other words, the parameter - does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized - to sum to 1 in floating point representation. 
- - - Support: 0..k where k = length(probability mass array)-1 - - - - - Initializes a new instance of the Categorical class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - If any of the probabilities are negative or do not sum to one. - - - - Initializes a new instance of the Categorical class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The random number generator which is used to draw random samples. - If any of the probabilities are negative or do not sum to one. - - - - Initializes a new instance of the Categorical class from a . The distribution - will not be automatically updated when the histogram changes. The categorical distribution will have - one value for each bucket and a probability for that value proportional to the bucket count. - - The histogram from which to create the categorical variable. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Checks whether the parameters of the distribution are valid. - - An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. - If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true - - - - Checks whether the parameters of the distribution are valid. - - An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. - If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true - - - - Gets the probability mass vector (non-negative ratios) of the multinomial. - - Sometimes the normalized probability vector cannot be represented exactly in a floating point representation. - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - Throws a . - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets he mode of the distribution. - - Throws a . - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. 
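For the Categorical entries above, the probability-mass vector is a set of non-negative ratios that need not sum to one. A small sketch of instance and static usage (member names assumed from these entries):

```csharp
// Illustrative only; member names assumed from the entries above.
using System;
using MathNet.Numerics.Distributions;

class CategoricalExample
{
    static void Main()
    {
        // Unnormalized ratios are allowed; they act as relative weights.
        var ratios = new[] { 1.0, 1.0, 2.0 };                // P(0) = P(1) = 0.25, P(2) = 0.5
        var cat = new Categorical(ratios);

        Console.WriteLine(cat.Probability(2));               // PMF at k = 2 -> 0.5
        Console.WriteLine(cat.CumulativeDistribution(1.0));  // P(X <= 1) = 0.5
        Console.WriteLine(cat.Sample());                     // 0, 1 or 2

        // Static sampling from the same ratio vector.
        Console.WriteLine(Categorical.Sample(new Random(7), ratios));
    }
}
```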
- - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - An array corresponding to a CDF for a categorical distribution. Not assumed to be normalized. - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. - - - - Computes the cumulative distribution function. This method performs no parameter checking. - If the probability mass was normalized, the resulting cumulative distribution is normalized as well (up to numerical errors). - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - An array representing the unnormalized cumulative distribution function. - - - - Returns one trials from the categorical distribution. - - The random number generator to use. - The (unnormalized) cumulative distribution of the probability distribution. - One sample from the categorical distribution implied by . - - - - Samples a Binomially distributed random variable. - - The number of successful trials. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Bernoulli distributed random variables. - - a sequence of successful trial counts. - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - The random number generator to use. - An array of nonnegative ratios. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - The random number generator to use. - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - An array of nonnegative ratios. Not assumed to be normalized. 
The remaining Categorical sampling overloads: single-sample, sequence and array-fill without an explicit random source, plus the variants that take a precomputed (not necessarily normalized) cumulative-distribution array, with and without a random source; all return integers between 0 and the size of the categorical (exclusive).

Cauchy distribution documentation: a symmetric continuous univariate distribution (Wikipedia - Cauchy distribution) with location x0 and scale γ > 0. Covered members: a default constructor (x0 = 0, γ = 1) and constructors taking location, scale and an optional random source; a string representation; parameter validation; the location, scale and random-source properties; the mean, variance, standard deviation, entropy, skewness, mode, median, minimum and maximum; PDF and lnPDF; CDF; InvCDF (the quantile or percent point function); instance single-sample, sequence and array-fill sampling; and the static PDF, lnPDF, CDF, InvCDF and random-source sample/sequence/array-fill helpers parameterized by location and scale.
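A corresponding sketch for the Cauchy members above, under the same Math.NET Numerics assumption; location, scale and seed are arbitrary:

```csharp
// Sketch: Cauchy density, CDF, quantile and sampling (assumed Math.NET Numerics API).
using System;
using MathNet.Numerics.Distributions;

class CauchySketch
{
    static void Main()
    {
        var cauchy = new Cauchy(0.0, 1.0, new Random(1));    // x0 = 0, scale gamma = 1 (> 0)

        Console.WriteLine(cauchy.Density(0.5));                          // PDF
        Console.WriteLine(cauchy.CumulativeDistribution(0.5));           // CDF
        Console.WriteLine(cauchy.InverseCumulativeDistribution(0.975));  // quantile
        Console.WriteLine(cauchy.Sample());                              // one random draw
        // Note: the Cauchy distribution has no finite mean or variance.
    }
}
```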
The Cauchy static sampling helpers without an explicit random source, followed by the Chi distribution documentation: a continuous univariate distribution that arises as the length of a k-dimensional vector whose orthogonal components are independent standard normals (Wikipedia - Chi distribution). Covered members: constructors taking the degrees of freedom k > 0 and an optional random source; a string representation; parameter validation; the degrees-of-freedom and random-source properties; the moments and summary statistics; PDF, lnPDF and CDF; instance single-sample, sequence and array-fill sampling; an unchecked sampling helper; and the static PDF, lnPDF, CDF and sample/sequence/array-fill helpers parameterized by the degrees of freedom.

Chi-Squared distribution documentation: the distribution of a sum of squares of k independent standard normal random variables (Wikipedia - Chi-squared distribution). Covered here: constructors taking the degrees of freedom k > 0 and an optional random source, a string representation, parameter validation, the degrees-of-freedom and random-source properties, the moments and summary statistics, PDF, lnPDF, CDF and InvCDF (the quantile or percent point function).
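A sketch of the Chi-Squared members (with the related Chi class), assuming the Math.NET Numerics classes ChiSquared and Chi; the degrees of freedom are arbitrary:

```csharp
// Sketch: Chi-Squared density, CDF and quantile (assumed Math.NET Numerics API).
using System;
using MathNet.Numerics.Distributions;

class ChiSquaredSketch
{
    static void Main()
    {
        var chi2 = new ChiSquared(5.0);                       // degrees of freedom k > 0

        Console.WriteLine(chi2.Mean);                         // equals k
        Console.WriteLine(chi2.Density(3.0));                 // PDF at 3
        Console.WriteLine(chi2.CumulativeDistribution(3.0));  // CDF at 3
        Console.WriteLine(ChiSquared.InvCDF(5.0, 0.95));      // 95% quantile via the static helper

        // The related Chi distribution models the length of a k-dimensional
        // vector of independent standard normals:
        Console.WriteLine(new Chi(3.0).Mean);
    }
}
```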
Chi-Squared sampling members (instance single-sample, sequence and array-fill plus an unchecked helper) and the static PDF, lnPDF, CDF, InvCDF and sample/sequence/array-fill helpers parameterized by the degrees of freedom, with and without an explicit random source.

ContinuousUniform distribution documentation: the continuous uniform distribution over an interval of the real numbers (Wikipedia - Continuous uniform distribution). Covered members: constructors with lower ≤ upper bounds (defaults 0 and 1, optional random source; an exception is thrown if the upper bound is smaller than the lower bound); a string representation; parameter validation; the lower-bound, upper-bound and random-source properties; the moments and summary statistics; PDF, lnPDF, CDF and InvCDF (quantile function); instance single-sample, sequence and array-fill sampling; and the static PDF, lnPDF, CDF and InvCDF helpers parameterized by the bounds.
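A short sketch of the ContinuousUniform members, assuming the Math.NET Numerics class of that name; the bounds are arbitrary:

```csharp
// Sketch: continuous uniform over [-2, 3] (assumed Math.NET Numerics API).
using System;
using MathNet.Numerics.Distributions;

class UniformSketch
{
    static void Main()
    {
        var u = new ContinuousUniform(-2.0, 3.0);   // lower <= upper, else the ctor throws

        Console.WriteLine(u.Mean);                               // 0.5
        Console.WriteLine(u.Density(0.0));                       // 1 / (upper - lower) = 0.2
        Console.WriteLine(u.CumulativeDistribution(1.0));        // (1 - (-2)) / 5 = 0.6
        Console.WriteLine(u.InverseCumulativeDistribution(0.5)); // median = 0.5
        Console.WriteLine(u.Sample());
    }
}
```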
The ContinuousUniform static sample/sequence/array-fill helpers (with and without an explicit random source), followed by the Conway-Maxwell-Poisson distribution documentation: a discrete univariate distribution that generalizes the Poisson, Geometric and Bernoulli distributions and is parameterized by λ > 0 and a rate of decay ν ≥ 0; ν = 0 gives a Geometric distribution, ν = 1 the Poisson distribution, and ν → ∞ converges to a Bernoulli distribution. The implementation caches the normalization constant, and because many properties can only be computed approximately, a tolerance level bounds the accepted error. Covered members: constructors with an optional random source; a string representation; parameter validation; the λ, ν and random-source properties; the moments and summary statistics; the smallest and largest integer-representable domain elements; PMF, lnPMF and CDF together with the static PMF, lnPMF and CDF helpers parameterized by λ and ν; the normalization constant and its approximation; an internal single-draw helper taking a random source, λ, ν and the normalization constant z; and the sample/sequence/array-fill overloads with and without an explicit random source.
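The limiting cases listed above (ν = 1 reducing to the Poisson distribution) can be checked directly; a sketch assuming the Math.NET Numerics ConwayMaxwellPoisson and Poisson classes, with an arbitrary λ:

```csharp
// Sketch: Conway-Maxwell-Poisson PMF/CDF and the nu = 1 (Poisson) limiting case
// (assumed Math.NET Numerics API; the match holds up to the approximated normalization constant).
using System;
using MathNet.Numerics.Distributions;

class CmpSketch
{
    static void Main()
    {
        var cmp = new ConwayMaxwellPoisson(2.0, 1.0);        // lambda > 0, nu >= 0

        Console.WriteLine(cmp.Probability(3));               // P(X = 3)
        Console.WriteLine(Poisson.PMF(2.0, 3));              // Poisson PMF for comparison (nu = 1)
        Console.WriteLine(cmp.CumulativeDistribution(3.0));  // P(X <= 3)
        Console.WriteLine(cmp.Sample());
    }
}
```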
Dirichlet distribution documentation (multivariate; Wikipedia - Dirichlet distribution). Covered members: constructors from an array of Dirichlet parameters, or from a single parameter value and a dimension, each with an optional random source; a string representation; parameter validation (no parameter may be negative and at least one must be positive); the parameter array, random-source, dimension and parameter-sum properties; mean, variance and entropy; the density and log density (the components of x must sum to 1; the last component may be left out and is then computed from the others); and the instance and static sampling helpers (the static one takes a random source and the parameter array).

DiscreteUniform distribution documentation: the discrete uniform distribution over the integers between an inclusive lower and an inclusive upper bound (Wikipedia - Discrete uniform distribution). Covered members: constructors (lower ≤ upper, optional random source); a string representation; parameter validation; the inclusive lower-bound, inclusive upper-bound and random-source properties; the moments and summary statistics (the mode is the middle element, since every element has the same probability); the smallest and largest integer-representable domain elements; PMF, lnPMF and CDF, instance and static, parameterized by the bounds; an unchecked single-draw helper; and the instance single-sample, sequence and array-fill members plus the static sample/sequence/array-fill helpers taking a random source.
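A sketch of the DiscreteUniform members with inclusive bounds, assuming the Math.NET Numerics class; a fair die is used as the example:

```csharp
// Sketch: discrete uniform over {1, ..., 6} (assumed Math.NET Numerics API).
using System;
using MathNet.Numerics.Distributions;

class DiceSketch
{
    static void Main()
    {
        var die = new DiscreteUniform(1, 6);                 // inclusive bounds

        Console.WriteLine(die.Probability(4));               // 1/6
        Console.WriteLine(die.CumulativeDistribution(4.0));  // 4/6
        Console.WriteLine(die.Mean);                         // 3.5

        var rolls = new int[10];
        die.Samples(rolls);                                  // array-fill overload documented above
        Console.WriteLine(string.Join(",", rolls));
    }
}
```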
The DiscreteUniform static sample/sequence/array-fill helpers without an explicit random source, followed by the Erlang distribution documentation: a continuous univariate distribution with wide applicability, primarily due to its relation to the exponential and Gamma distributions (Wikipedia - Erlang distribution). Covered members: constructors from shape k ≥ 0 and rate (inverse scale) λ ≥ 0 with an optional random source, plus factory-style constructors from shape and scale μ ≥ 0 or shape and rate; a string representation; parameter validation; the shape, rate, scale and random-source properties; the moments and summary statistics; PDF, lnPDF and CDF; instance single-sample, sequence and array-fill sampling; and the static PDF, lnPDF, CDF and sample/sequence/array-fill helpers parameterized by shape and rate, with and without an explicit random source.

Exponential distribution documentation: a distribution over the real numbers parameterized by a single nonnegative rate λ (Wikipedia - Exponential distribution). Covered members: constructors (rate λ ≥ 0, optional random source); a string representation; parameter validation; the rate and random-source properties; the moments and summary statistics; PDF, lnPDF, CDF and InvCDF (quantile function); instance single-sample, sequence and array-fill sampling; and the static PDF, lnPDF, CDF, InvCDF and random-source sampling helpers parameterized by the rate.
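A sketch of the Exponential members, assuming the Math.NET Numerics class; the rate, and the parameter order of the static helper (rate first, then x), are assumptions:

```csharp
// Sketch: exponential distribution with rate 0.5 (assumed Math.NET Numerics API).
using System;
using MathNet.Numerics.Distributions;

class ExponentialSketch
{
    static void Main()
    {
        var exp = new Exponential(0.5);                            // rate lambda >= 0, mean = 1/lambda

        Console.WriteLine(exp.Mean);                               // 2
        Console.WriteLine(exp.CumulativeDistribution(2.0));        // 1 - e^(-1)
        Console.WriteLine(exp.InverseCumulativeDistribution(0.5)); // median = ln(2) / lambda

        Console.WriteLine(Exponential.CDF(0.5, 2.0));              // static helper (rate, then x)
    }
}
```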
The remaining Exponential static sampling helpers (array-fill and sequence overloads, with and without an explicit random source), followed by the F-distribution (FisherSnedecor) documentation: the continuous univariate Fisher-Snedecor distribution (Wikipedia - F-distribution). Covered members: constructors taking the two degrees of freedom d1 > 0 and d2 > 0 and an optional random source; a string representation; parameter validation; the two degrees-of-freedom properties and the random source; the moments and summary statistics; PDF, lnPDF and CDF; InvCDF (quantile function, flagged with the warning that it is currently not an explicit implementation and hence slow and unreliable); instance single-sample, sequence and array-fill sampling; an unchecked single-sample helper; and the static PDF, lnPDF, CDF, InvCDF (same warning) and sample/sequence/array-fill helpers parameterized by d1 and d2.
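A sketch of the F-distribution members, assuming the Math.NET Numerics FisherSnedecor class; the degrees of freedom are arbitrary, and the quantile call is the numerically solved InvCDF the docs warn about:

```csharp
// Sketch: F-distribution density, CDF and quantile (assumed Math.NET Numerics API).
using System;
using MathNet.Numerics.Distributions;

class FisherSnedecorSketch
{
    static void Main()
    {
        var f = new FisherSnedecor(5.0, 10.0);               // d1 > 0, d2 > 0

        Console.WriteLine(f.Density(1.5));
        Console.WriteLine(f.CumulativeDistribution(1.5));

        // Per the warning above, InvCDF is not an explicit implementation,
        // so expect it to be comparatively slow.
        Console.WriteLine(f.InverseCumulativeDistribution(0.95));
        Console.WriteLine(f.Sample());
    }
}
```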
The remaining FisherSnedecor array-fill overload, followed by the Gamma distribution documentation (Wikipedia - Gamma distribution). Remarks carried in the docs: the distribution is parameterized by a shape and an inverse scale (rate) parameter; a point distribution is represented by setting the shape to the location of the point and the inverse scale to positive infinity, while shape and inverse scale both zero is undefined; and random number generation is based on Marsaglia & Tsang, "A Simple Method for Generating Gamma Variables", ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, pp. 363-372. Covered members: constructors from shape (k, α ≥ 0) and rate (β ≥ 0) with an optional random source, plus factory-style constructors from shape and scale θ ≥ 0 or shape and rate; a string representation; parameter validation; the shape, rate, scale and random-source properties; the moments and summary statistics; PDF, lnPDF, CDF and InvCDF (quantile function); instance single-sample, sequence and array-fill sampling; the unchecked Marsaglia-Tsang sampling kernel; and the static PDF, lnPDF, CDF, InvCDF and random-source sample/sequence/array-fill helpers parameterized by shape and rate.
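A sketch of the Gamma members, assuming the Math.NET Numerics Gamma class with the shape/rate parameterization described above; shape, rate and seed are arbitrary:

```csharp
// Sketch: Gamma(shape = 2, rate = 0.5), i.e. scale = 2 (assumed Math.NET Numerics API).
using System;
using MathNet.Numerics.Distributions;

class GammaSketch
{
    static void Main()
    {
        var gamma = new Gamma(2.0, 0.5, new Random(7));

        Console.WriteLine(gamma.Mean);                                // shape / rate = 4
        Console.WriteLine(gamma.Density(3.0));
        Console.WriteLine(gamma.CumulativeDistribution(3.0));
        Console.WriteLine(gamma.InverseCumulativeDistribution(0.9));

        // Sampling follows the Marsaglia & Tsang algorithm referenced in the docs.
        Console.WriteLine(gamma.Sample());
    }
}
```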
The remaining Gamma static sampling helpers without an explicit random source, followed by the Geometric distribution documentation: a discrete distribution over the positive integers parameterized by the success probability p, 0 ≤ p ≤ 1; this implementation never generates 0 (Wikipedia - Geometric distribution). Covered members: constructors with an optional random source; a string representation; parameter validation; the success-probability and random-source properties; the moments and summary statistics (one entry is documented as throwing a not-supported exception); the smallest and largest integer-representable domain elements; PMF, lnPMF and CDF; and the corresponding static PMF and lnPMF helpers parameterized by p.
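A sketch of the Geometric members, assuming the Math.NET Numerics class; note the support starting at 1, as the docs state this implementation never generates 0:

```csharp
// Sketch: geometric distribution with success probability 0.25 (assumed Math.NET Numerics API).
using System;
using MathNet.Numerics.Distributions;

class GeometricSketch
{
    static void Main()
    {
        var geo = new Geometric(0.25);                       // 0 <= p <= 1

        Console.WriteLine(geo.Probability(1));               // p = 0.25 (support is {1, 2, 3, ...})
        Console.WriteLine(geo.Probability(3));               // (1 - p)^2 * p
        Console.WriteLine(geo.CumulativeDistribution(3.0));  // 1 - (1 - p)^3
        Console.WriteLine(geo.Sample());
    }
}
```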
- The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the cumulative distribution at location . - - - - - Returns one sample from the distribution. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - One sample from the distribution implied by . - - - - Samples a Geometric distributed random variable. - - A sample from the Geometric distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Geometric distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a random variable. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Samples a random variable. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Discrete Univariate Hypergeometric distribution. - This distribution is a discrete probability distribution that describes the number of successes in a sequence - of n draws from a finite population without replacement, just as the binomial distribution - describes the number of successes for draws with replacement - Wikipedia - Hypergeometric distribution. - - - - - Initializes a new instance of the Hypergeometric class. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Initializes a new instance of the Hypergeometric class. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the size of the population (N). - - - - - Gets the number of draws without replacement (n). - - - - - Gets the number successes within the population (K, M). - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. 
- - - - - Gets the maximum of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - the cumulative distribution at location . - - - - - Generates a sample from the Hypergeometric distribution without doing parameter checking. - - The random number generator to use. - The size of the population (N). - The number successes within the population (K, M). - The n parameter of the distribution. - a random number from the Hypergeometric distribution. - - - - Samples a Hypergeometric distributed random variable. - - The number of successes in n trials. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Hypergeometric distributed random variables. - - a sequence of successes in n trials. - - - - Samples a random variable. - - The random number generator to use. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Samples a sequence of this random variable. - - The random number generator to use. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Samples a random variable. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Samples a sequence of this random variable. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The size of the population (N). - The number successes within the population (K, M). 
- The number of draws without replacement (n). - - - - Continuous Univariate Probability Distribution. - - - - - - Gets the mode of the distribution. - - - - - Gets the smallest element in the domain of the distribution which can be represented by a double. - - - - - Gets the largest element in the domain of the distribution which can be represented by a double. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - Draws a random sample from the distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Draws a sequence of random samples from the distribution. - - an infinite sequence of samples from the distribution. - - - - Discrete Univariate Probability Distribution. - - - - - - Gets the mode of the distribution. - - - - - Gets the smallest element in the domain of the distribution which can be represented by an integer. - - - - - Gets the largest element in the domain of the distribution which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Draws a random sample from the distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Draws a sequence of random samples from the distribution. - - an infinite sequence of samples from the distribution. - - - - Probability Distribution. - - - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Continuous Univariate Inverse Gamma distribution. - The inverse Gamma distribution is a distribution over the positive real numbers parameterized by - two positive parameters. - Wikipedia - InverseGamma distribution. - - - - - Initializes a new instance of the class. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - - - - Initializes a new instance of the class. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - - - - Gets or sets the shape (α) parameter. Range: α > 0. - - - - - Gets or sets The scale (β) parameter. Range: β > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. 
- - - - - Gets the median of the distribution. - - Throws . - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Draws a random sample from the distribution. - - A random number from this distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Cauchy distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - the cumulative distribution at location . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The shape (α) of the distribution. Range: α > 0. - The scale (β) of the distribution. Range: β > 0. - a sequence of samples from the distribution. - - - - Gets the mean (μ) of the distribution. Range: μ > 0. - - - - - Gets the shape (λ) of the distribution. Range: λ > 0. - - - - - Initializes a new instance of the InverseGaussian class. - - The mean (μ) of the distribution. Range: μ > 0. 
- The shape (λ) of the distribution. Range: λ > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - - - - Gets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Inverse Gaussian distribution. - - - - - Gets the variance of the Inverse Gaussian distribution. - - - - - Gets the standard deviation of the Inverse Gaussian distribution. - - - - - Gets the median of the Inverse Gaussian distribution. - No closed form analytical expression exists, so this value is approximated numerically and can throw an exception. - - - - - Gets the minimum of the Inverse Gaussian distribution. - - - - - Gets the maximum of the Inverse Gaussian distribution. - - - - - Gets the skewness of the Inverse Gaussian distribution. - - - - - Gets the kurtosis of the Inverse Gaussian distribution. - - - - - Gets the mode of the Inverse Gaussian distribution. - - - - - Gets the entropy of the Inverse Gaussian distribution (currently not supported). - - - - - Generates a sample from the inverse Gaussian distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - - - - Generates a sequence of samples from the inverse Gaussian distribution. - - a sequence of samples from the distribution. - - - - Generates a sample from the inverse Gaussian distribution. - - The random number generator to use. - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - - - - Generates a sequence of samples from the Burr distribution. - - The random number generator to use. - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. - - The location at which to compute the inverse cumulative distribution function. - the inverse cumulative distribution at location . - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). 
- - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. - - The mean (μ) of the distribution. Range: μ > 0. - The shape (λ) of the distribution. Range: λ > 0. - The location at which to compute the inverse cumulative distribution function. - the inverse cumulative distribution at location . - - - - - Estimates the Inverse Gaussian parameters from sample data with maximum-likelihood. - - The samples to estimate the distribution parameters from. - The random number generator which is used to draw random samples. Optional, can be null. - An Inverse Gaussian distribution. - - - - Multivariate Inverse Wishart distribution. This distribution is - parameterized by the degrees of freedom nu and the scale matrix S. The inverse Wishart distribution - is the conjugate prior for the covariance matrix of a multivariate normal distribution. - Wikipedia - Inverse-Wishart distribution. - - - - - Caches the Cholesky factorization of the scale matrix. - - - - - Initializes a new instance of the class. - - The degree of freedom (ν) for the inverse Wishart distribution. - The scale matrix (Ψ) for the inverse Wishart distribution. - - - - Initializes a new instance of the class. - - The degree of freedom (ν) for the inverse Wishart distribution. - The scale matrix (Ψ) for the inverse Wishart distribution. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The degree of freedom (ν) for the inverse Wishart distribution. - The scale matrix (Ψ) for the inverse Wishart distribution. - - - - Gets or sets the degree of freedom (ν) for the inverse Wishart distribution. - - - - - Gets or sets the scale matrix (Ψ) for the inverse Wishart distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean. - - The mean of the distribution. - - - - Gets the mode of the distribution. - - The mode of the distribution. - A. O'Hagan, and J. J. Forster (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference. 2B (2 ed.). Arnold. ISBN 0-340-80752-0. - - - - Gets the variance of the distribution. - - The variance of the distribution. - Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. - - - - Evaluates the probability density function for the inverse Wishart distribution. - - The matrix at which to evaluate the density at. - If the argument does not have the same dimensions as the scale matrix. - the density at . - - - - Samples an inverse Wishart distributed random variable by sampling - a Wishart random variable and inverting the matrix. - - a sample from the distribution. - - - - Samples an inverse Wishart distributed random variable by sampling - a Wishart random variable and inverting the matrix. - - The random number generator to use. 
- The degree of freedom (ν) for the inverse Wishart distribution. - The scale matrix (Ψ) for the inverse Wishart distribution. - a sample from the distribution. - - - - Univariate Probability Distribution. - - - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the median of the distribution. - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Continuous Univariate Laplace distribution. - The Laplace distribution is a distribution over the real numbers parameterized by a mean and - scale parameter. The PDF is: - p(x) = \frac{1}{2 * scale} \exp{- |x - mean| / scale}. - Wikipedia - Laplace distribution. - - - - - Initializes a new instance of the class (location = 0, scale = 1). - - - - - Initializes a new instance of the class. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - If is negative. - - - - Initializes a new instance of the class. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - The random number generator which is used to draw random samples. - If is negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - - - - Gets the location (μ) of the Laplace distribution. - - - - - Gets the scale (b) of the Laplace distribution. Range: b > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Samples a Laplace distributed random variable. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sample from the Laplace distribution. - - a sample from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. 
ln(∂P(X ≤ x)/∂x). - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - the cumulative distribution at location . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Continuous Univariate Log-Normal distribution. - For details about this distribution, see - Wikipedia - Log-Normal distribution. - - - - - Initializes a new instance of the class. - The distribution will be initialized with the default - random number generator. - - The log-scale (μ) of the logarithm of the distribution. - The shape (σ) of the logarithm of the distribution. Range: σ ≥ 0. - - - - Initializes a new instance of the class. - The distribution will be initialized with the default - random number generator. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. - - - - Constructs a log-normal distribution with the desired mu and sigma parameters. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. Optional, can be null. - A log-normal distribution. - - - - Constructs a log-normal distribution with the desired mean and variance. - - The mean of the log-normal distribution. - The variance of the log-normal distribution. - The random number generator which is used to draw random samples. Optional, can be null. - A log-normal distribution. - - - - Estimates the log-normal distribution parameters from sample data with maximum-likelihood. - - The samples to estimate the distribution parameters from. - The random number generator which is used to draw random samples. Optional, can be null. - A log-normal distribution. - MATLAB: lognfit - - - - A string representation of the distribution. - - a string representation of the distribution. 
- - - - Tests whether the provided values are valid parameters for this distribution. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - - - - Gets the log-scale (μ) (mean of the logarithm) of the distribution. - - - - - Gets the shape (σ) (standard deviation of the logarithm) of the distribution. Range: σ ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mu of the log-normal distribution. - - - - - Gets the variance of the log-normal distribution. - - - - - Gets the standard deviation of the log-normal distribution. - - - - - Gets the entropy of the log-normal distribution. - - - - - Gets the skewness of the log-normal distribution. - - - - - Gets the mode of the log-normal distribution. - - - - - Gets the median of the log-normal distribution. - - - - - Gets the minimum of the log-normal distribution. - - - - - Gets the maximum of the log-normal distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the log-normal distribution using the Box-Muller algorithm. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the density at . - - MATLAB: lognpdf - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the density. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the cumulative distribution at location . - - MATLAB: logncdf - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the inverse cumulative density at . 
- - MATLAB: logninv - - - - Generates a sample from the log-normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the log-normal distribution using the Box-Muller algorithm. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Multivariate Matrix-valued Normal distributions. The distribution - is parameterized by a mean matrix (M), a covariance matrix for the rows (V) and a covariance matrix - for the columns (K). If the dimension of M is d-by-m then V is d-by-d and K is m-by-m. - Wikipedia - MatrixNormal distribution. - - - - - The mean of the matrix normal distribution. - - - - - The covariance matrix for the rows. - - - - - The covariance matrix for the columns. - - - - - Initializes a new instance of the class. - - The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - If the dimensions of the mean and two covariance matrices don't match. - - - - Initializes a new instance of the class. - - The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - The random number generator which is used to draw random samples. - If the dimensions of the mean and two covariance matrices don't match. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - - - - Gets the mean. (M) - - The mean of the distribution. - - - - Gets the row covariance. (V) - - The row covariance. - - - - Gets the column covariance. (K) - - The column covariance. - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Evaluates the probability density function for the matrix normal distribution. - - The matrix at which to evaluate the density at. - the density at - If the argument does not have the correct dimensions. - - - - Samples a matrix normal distributed random variable. - - A random number from this distribution. - - - - Samples a matrix normal distributed random variable. - - The random number generator to use. 
- The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - If the dimensions of the mean and two covariance matrices don't match. - a sequence of samples from the distribution. - - - - Samples a vector normal distributed random variable. - - The random number generator to use. - The mean of the vector normal distribution. - The covariance matrix of the vector normal distribution. - a sequence of samples from defined distribution. - - - - Multivariate Multinomial distribution. For details about this distribution, see - Wikipedia - Multinomial distribution. - - - The distribution is parameterized by a vector of ratios: in other words, the parameter - does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized - to sum to 1 in floating point representation. - - - - - Stores the normalized multinomial probabilities. - - - - - The number of trials. - - - - - Initializes a new instance of the Multinomial class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - If any of the probabilities are negative or do not sum to one. - If is negative. - - - - Initializes a new instance of the Multinomial class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - The random number generator which is used to draw random samples. - If any of the probabilities are negative or do not sum to one. - If is negative. - - - - Initializes a new instance of the Multinomial class from histogram . The distribution will - not be automatically updated when the histogram changes. - - Histogram instance - The number of trials. - If any of the probabilities are negative or do not sum to one. - If is negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - If any of the probabilities are negative returns false, - if the sum of parameters is 0.0, or if the number of trials is negative; otherwise true. - - - - Gets the proportion of ratios. - - - - - Gets the number of trials. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Computes values of the probability mass function. - - Non-negative integers x1, ..., xk - The probability mass at location . - When is null. - When length of is not equal to event probabilities count. - - - - Computes values of the log probability mass function. - - Non-negative integers x1, ..., xk - The log probability mass at location . - When is null. - When length of is not equal to event probabilities count. - - - - Samples one multinomial distributed random variable. - - the counts for each of the different possible values. - - - - Samples a sequence multinomially distributed random variables. - - a sequence of counts for each of the different possible values. - - - - Samples one multinomial distributed random variable. - - The random number generator to use. 
- An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - the counts for each of the different possible values. - - - - Samples a multinomially distributed random variable. - - The random number generator to use. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of variables needed. - a sequence of counts for each of the different possible values. - - - - Discrete Univariate Negative Binomial distribution. - The negative binomial is a distribution over the natural numbers with two parameters r, p. For the special - case that r is an integer one can interpret the distribution as the number of failures before the r'th success - when the probability of success is p. - Wikipedia - NegativeBinomial distribution. - - - - - Initializes a new instance of the class. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Initializes a new instance of the class. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Gets the number of successes. Range: r ≥ 0. - - - - - Gets the probability of success. Range: 0 ≤ p ≤ 1. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). 
- - The location in the domain where we want to evaluate the log probability mass function. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - the cumulative distribution at location . - - - - - Samples a negative binomial distributed random variable. - - The random number generator to use. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - a sample from the distribution. - - - - Samples a NegativeBinomial distributed random variable. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of NegativeBinomial distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a random variable. - - The random number generator to use. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The random number generator to use. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Samples a random variable. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Continuous Univariate Normal distribution, also known as Gaussian distribution. - For details about this distribution, see - Wikipedia - Normal distribution. - - - - - Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 - and standard deviation 1.0. The distribution will - be initialized with the default random number generator. - - - - - Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 - and standard deviation 1.0. The distribution will - be initialized with the default random number generator. - - The random number generator which is used to draw random samples. - - - - Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will - be initialized with the default random number generator. - - The mean (μ) of the normal distribution. 
- The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - - - - Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will - be initialized with the default random number generator. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. - - - - Constructs a normal distribution from a mean and standard deviation. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. Optional, can be null. - a normal distribution. - - - - Constructs a normal distribution from a mean and variance. - - The mean (μ) of the normal distribution. - The variance (σ^2) of the normal distribution. - The random number generator which is used to draw random samples. Optional, can be null. - A normal distribution. - - - - Constructs a normal distribution from a mean and precision. - - The mean (μ) of the normal distribution. - The precision of the normal distribution. - The random number generator which is used to draw random samples. Optional, can be null. - A normal distribution. - - - - Estimates the normal distribution parameters from sample data with maximum-likelihood. - - The samples to estimate the distribution parameters from. - The random number generator which is used to draw random samples. Optional, can be null. - A normal distribution. - MATLAB: normfit - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - - - - Gets the mean (μ) of the normal distribution. - - - - - Gets the standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - - - - - Gets the variance of the normal distribution. - - - - - Gets the precision of the normal distribution. - - - - - Gets the random number generator which is used to draw random samples. - - - - - Gets the entropy of the normal distribution. - - - - - Gets the skewness of the normal distribution. - - - - - Gets the mode of the normal distribution. - - - - - Gets the median of the normal distribution. - - - - - Gets the minimum of the normal distribution. - - - - - Gets the maximum of the normal distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the normal distribution using the Box-Muller algorithm. - - a sample from the distribution. 
- - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The location at which to compute the density. - the density at . - - MATLAB: normpdf - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - the cumulative distribution at location . - - MATLAB: normcdf - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - the inverse cumulative density at . - - MATLAB: norminv - - - - Generates a sample from the normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the normal distribution using the Box-Muller algorithm. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - This structure represents the type over which the distribution - is defined. - - - - - Initializes a new instance of the struct. - - The mean of the pair. - The precision of the pair. - - - - Gets or sets the mean of the pair. - - - - - Gets or sets the precision of the pair. 
- - - - - Multivariate Normal-Gamma Distribution. - The distribution is the conjugate prior distribution for the - distribution. It specifies a prior over the mean and precision of the distribution. - It is parameterized by four numbers: the mean location, the mean scale, the precision shape and the - precision inverse scale. - The distribution NG(mu, tau | mloc,mscale,psscale,pinvscale) = Normal(mu | mloc, 1/(mscale*tau)) * Gamma(tau | psscale,pinvscale). - The following degenerate cases are special: when the precision is known, - the precision shape will encode the value of the precision while the precision inverse scale is positive - infinity. When the mean is known, the mean location will encode the value of the mean while the scale - will be positive infinity. A completely degenerate NormalGamma distribution with known mean and precision is possible as well. - Wikipedia - Normal-Gamma distribution. - - - - - Initializes a new instance of the class. - - The location of the mean. - The scale of the mean. - The shape of the precision. - The inverse scale of the precision. - - - - Initializes a new instance of the class. - - The location of the mean. - The scale of the mean. - The shape of the precision. - The inverse scale of the precision. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The location of the mean. - The scale of the mean. - The shape of the precision. - The inverse scale of the precision. - - - - Gets the location of the mean. - - - - - Gets the scale of the mean. - - - - - Gets the shape of the precision. - - - - - Gets the inverse scale of the precision. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Returns the marginal distribution for the mean of the NormalGamma distribution. - - the marginal distribution for the mean of the NormalGamma distribution. - - - - Returns the marginal distribution for the precision of the distribution. - - The marginal distribution for the precision of the distribution/ - - - - Gets the mean of the distribution. - - The mean of the distribution. - - - - Gets the variance of the distribution. - - The mean of the distribution. - - - - Evaluates the probability density function for a NormalGamma distribution. - - The mean/precision pair of the distribution - Density value - - - - Evaluates the probability density function for a NormalGamma distribution. - - The mean of the distribution - The precision of the distribution - Density value - - - - Evaluates the log probability density function for a NormalGamma distribution. - - The mean/precision pair of the distribution - The log of the density value - - - - Evaluates the log probability density function for a NormalGamma distribution. - - The mean of the distribution - The precision of the distribution - The log of the density value - - - - Generates a sample from the NormalGamma distribution. - - a sample from the distribution. - - - - Generates a sequence of samples from the NormalGamma distribution - - a sequence of samples from the distribution. - - - - Generates a sample from the NormalGamma distribution. - - The random number generator to use. - The location of the mean. - The scale of the mean. - The shape of the precision. - The inverse scale of the precision. - a sample from the distribution. 
- - - - Generates a sequence of samples from the NormalGamma distribution - - The random number generator to use. - The location of the mean. - The scale of the mean. - The shape of the precision. - The inverse scale of the precision. - a sequence of samples from the distribution. - - - - Continuous Univariate Pareto distribution. - The Pareto distribution is a power law probability distribution that coincides with social, - scientific, geophysical, actuarial, and many other types of observable phenomena. - For details about this distribution, see - Wikipedia - Pareto distribution. - - - - - Initializes a new instance of the class. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - If or are negative. - - - - Initializes a new instance of the class. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The random number generator which is used to draw random samples. - If or are negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - - - - Gets the scale (xm) of the distribution. Range: xm > 0. - - - - - Gets the shape (α) of the distribution. Range: α > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Draws a random sample from the distribution. - - A random number from this distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Pareto distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The scale (xm) of the distribution. Range: xm > 0. 
- [API documentation: the remaining static Pareto methods: log-PDF, CDF, inverse CDF and Sample/Samples/Fill overloads with and without an explicit random source, all taking the scale xm > 0 and shape α > 0. Discrete univariate Poisson distribution (see Wikipedia: Poisson distribution): probability mass f(k) = exp(-λ)·λ^k/k!, with Knuth's method used to generate Poisson-distributed random variables. The documented members are the constructors with λ > 0 (throwing if λ is less than or equal to 0, with an optional random source), ToString, parameter validation, the λ getter, the random source property, and the mean, variance, standard deviation and entropy properties.]
- [API documentation: further Poisson members: the entropy and the median are approximations (see the Wikipedia article), plus the skewness, mode, and the smallest and largest elements of the domain representable by an integer; instance and static PMF, log-PMF and CDF; internal samplers using Knuth's method and the "Rejection method PA" from A. C. Atkinson, "The Computer Generation of Poisson Random Variables", Applied Statistics 28(1), 1979, pp. 29-35 (the algorithm is on p. 32); and instance and static Sample, Samples and array-fill methods returning Poisson-distributed samples.]
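The documentation names Knuth's method as one of the samplers; below is a short sketch of that method and of the log-PMF implied by the formula f(k) = exp(-λ)·λ^k/k!. It is an illustration only (helper names are ours), not the library's implementation, and Knuth's multiplication-of-uniforms approach is only practical for small λ.

```csharp
using System;

// Poisson log-PMF: k*ln(lambda) - lambda - ln(k!), with ln(k!) accumulated directly.
double PoissonLnPmf(int k, double lambda)
{
    double lnFactorial = 0.0;
    for (int i = 2; i <= k; i++) lnFactorial += Math.Log(i);
    return k * Math.Log(lambda) - lambda - lnFactorial;
}

// Knuth's sampler: multiply uniforms until the product drops below exp(-lambda).
int PoissonSampleKnuth(Random rng, double lambda)
{
    double limit = Math.Exp(-lambda), product = 1.0;
    int k = 0;
    do { k++; product *= rng.NextDouble(); } while (product > limit);
    return k - 1;
}

var rng = new Random(7);
Console.WriteLine($"{Math.Exp(PoissonLnPmf(3, 2.5))} {PoissonSampleKnuth(rng, 2.5)}");
```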
- [API documentation: the remaining static Poisson Sample/Samples/Fill overloads. Continuous univariate Rayleigh distribution (pronounced /ˈreɪli/): it arises, for example, as the magnitude of a two-dimensional wind velocity vector whose components are uncorrelated, normally distributed and of equal variance (see Wikipedia: Rayleigh distribution). It is parameterized by a scale σ > 0. The documented members are the constructors (with an optional random source), ToString, parameter validation, the scale getter, the random source property, the moment and support properties (mean, variance, standard deviation, entropy, skewness, mode, median, minimum, maximum), the instance PDF, log-PDF, CDF and inverse CDF, and instance sampling (single sample, array fill, sequence).]
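For illustration (not the library's code), the standard Rayleigh(σ) formulas referenced above, with inverse-transform sampling; helper names are ours.

```csharp
using System;

// Standard Rayleigh(sigma) formulas:
// pdf(x) = x/sigma^2 * exp(-x^2/(2 sigma^2)),  cdf(x) = 1 - exp(-x^2/(2 sigma^2))  for x >= 0,
// quantile(p) = sigma * sqrt(-2 ln(1-p)).
double RayleighPdf(double sigma, double x) =>
    x < 0.0 ? 0.0 : x / (sigma * sigma) * Math.Exp(-x * x / (2.0 * sigma * sigma));

double RayleighCdf(double sigma, double x) =>
    x < 0.0 ? 0.0 : 1.0 - Math.Exp(-x * x / (2.0 * sigma * sigma));

double RayleighInvCdf(double sigma, double p) =>
    sigma * Math.Sqrt(-2.0 * Math.Log(1.0 - p));

var rng = new Random(3);
Console.WriteLine($"{RayleighCdf(1.0, 1.0)} {RayleighInvCdf(1.0, rng.NextDouble())}");
```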
- [API documentation: the remaining static Rayleigh PDF, log-PDF, CDF, inverse CDF and Sample/Samples/Fill overloads, with and without an explicit random source. Continuous univariate Skewed Generalized Error Distribution (SGED): implements the univariate skewed generalized error distribution (see Wikipedia: Generalized Error Distribution); it corresponds to the skewed generalized t-distribution with q = ∞ and includes the Laplace and Normal distributions as special cases. The implementation follows the R package sgt and its vignette (https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf), with mean and variance adjustment always enabled, so the location μ is the mean and the scale σ squared is the variance; the default random source is used unless one is supplied, and incoming parameters are range-checked. The parameterless constructor gives location 0, scale 1, skew 0 and p = 2 (a standard normal distribution), and a second constructor takes the parameters explicitly.]
- [API documentation: the SGED constructor takes a location μ, a scale σ > 0, a skew λ with -1 < λ < 1 and a kurtosis parameter p > 0; different parameterizations result in different distributions. Also documented: the random source property, ToString, parameter validation, the μ/σ/λ/p getters, and static Sample/Samples/Fill overloads (inverse-transform based) with and without an explicit random source. Continuous univariate Skewed Generalized t-distribution (SGT, see Wikipedia: Skewed generalized t-distribution): a family containing many distributions as special cases depending on the parameterization chosen, implemented after the R package sgt and its vignette (https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf).]
- [API documentation: as with the SGED, mean and variance adjustment are always enabled compared to the R implementation, so the location μ is the mean and the scale σ squared is the variance; the default random source is used unless replaced, and incoming parameters are range-checked. The documented members are the parameterless constructor (location 0, scale 1, skew 0, p = 2, q = ∞, i.e. a standard normal distribution), the constructor taking location μ, scale σ > 0, skew -1 < λ < 1 and the kurtosis parameters p > 0 and q > 0, a helper that maps a parameter set to the matching known distribution (or null if none matches), the random source property, ToString, parameter validation, the μ/σ/λ/p/q getters, and the static PDF, log-PDF and CDF taking the full parameter set.]
- [API documentation: the remaining SGT members: the static inverse CDF (quantile function) taking the full parameter set, the instance inverse CDF, and static Sample/Samples/Fill overloads (inverse-transform based) with and without an explicit random source.]
- [API documentation: continuous univariate Stable distribution: a random variable is said to be stable if a linear combination of two independent copies has the same distribution, up to location and scale parameters (see Wikipedia: Stable distribution). It is parameterized by stability 0 < α ≤ 2, skewness -1 ≤ β ≤ 1, scale c > 0 and location μ. The documented members are the constructors (with an optional random source), ToString, parameter validation, the parameter getters, the random source property, and the moment and support properties; the entropy is not supported, the skewness is only supported for α = 2, and the mode and median are only supported for β = 0. The instance PDF and log-PDF are documented, the CDF is only supported for the special cases α = 2 (Normal), α = 1 with β = 0 (Cauchy) and α = 0.5 with β = 1 (Lévy), and internal and instance sampling methods follow.]
- [API documentation: the remaining Stable members: the instance sample sequence, the static PDF, log-PDF and CDF for given α, β, c and μ, and static Sample/Samples/Fill overloads with and without an explicit random source. Continuous univariate Student's t-distribution (see Wikipedia: Student's t-distribution), implemented in a slightly generalized form compared to the Wikipedia article.]
- [API documentation: the generalization also parameterizes the location and scale; see the book "Bayesian Data Analysis" by Gelman et al. for details. The density is p(x | μ, σ, ν) = Γ((ν+1)/2) · (1 + (x-μ)²/(σ²·ν))^(-(ν+1)/2) / (Γ(ν/2) · √(ν·π) · σ). The distribution uses the default random source unless one is supplied, and parameter checks can be turned off via Control.CheckDistributionParameters. The documented members are the constructors (default: location 0, scale 1, one degree of freedom; plus overloads taking location, scale, degrees of freedom and a random source), ToString, parameter validation, the μ/σ/ν getters, the random source property, the moment and support properties (mean, variance, standard deviation, entropy, skewness, mode, median, minimum, maximum), and the instance PDF, log-PDF, CDF and inverse CDF.]
- [API documentation: the inverse CDF carries a warning that it is currently not an explicit implementation and is therefore slow and unreliable. Internal sampling follows method 2 in section 5 of chapter 9 of L. Devroye, "Non-Uniform Random Variate Generation". Also documented: instance Sample/Samples/Fill, the static PDF, log-PDF, CDF and inverse CDF for given location μ, scale σ > 0 and degrees of freedom ν > 0, and static Sample/Samples/Fill overloads taking an explicit random source.]
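The library's sampler follows the Devroye method cited above; as a simpler hedged sketch (not that method, and valid only for integer degrees of freedom), a location-scale Student-t draw can be built from a standard normal divided by the square root of a scaled chi-square. Helper names are ours.

```csharp
using System;

// Location-scale Student-t sample for integer dof:
// t = Z / sqrt(ChiSquare(nu)/nu), then x = location + scale * t.
var rng = new Random(11);

double NextNormal()                         // standard normal via Box-Muller
{
    double u1 = 1.0 - rng.NextDouble(), u2 = rng.NextDouble();
    return Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
}

double SampleStudentT(double location, double scale, int dof)
{
    double chiSquare = 0.0;                 // sum of dof squared standard normals
    for (int i = 0; i < dof; i++) { double z = NextNormal(); chiSquare += z * z; }
    return location + scale * NextNormal() / Math.Sqrt(chiSquare / dof);
}

Console.WriteLine(SampleStudentT(0.0, 1.0, 5));
```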
- [API documentation: the remaining static Student-t Sample/Samples/Fill overloads without an explicit random source. Triangular distribution (see Wikipedia: Triangular distribution), parameterized by a lower bound, an upper bound and a mode with lower ≤ mode ≤ upper; the constructors throw if the ordering is violated, and parameter checks can be turned off via Control.CheckDistributionParameters. The documented members are ToString, parameter validation, the lower bound, upper bound and mode properties, the random source property, the moment and support properties (mean, variance, standard deviation, entropy, skewness, median, minimum, maximum), and the instance PDF, log-PDF, CDF and inverse CDF.]
- [API documentation: the Triangular instance Sample/Samples/Fill methods, and the static PDF, log-PDF, CDF, inverse CDF and Sample/Samples/Fill overloads, with and without an explicit random source, all taking the lower bound, upper bound and mode (lower ≤ mode ≤ upper).]
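As an illustration of the triangular parameterization (not the library's code), the standard density and quantile formulas, assuming a strict ordering lower < mode < upper to keep the branches simple; helper names are ours.

```csharp
using System;

// Triangular(lower a, upper b, mode c): piecewise-linear density and its quantile.
double TriangularPdf(double a, double b, double c, double x)
{
    if (x < a || x > b) return 0.0;
    return x <= c ? 2.0 * (x - a) / ((b - a) * (c - a))
                  : 2.0 * (b - x) / ((b - a) * (b - c));
}

double TriangularInvCdf(double a, double b, double c, double p)
{
    double pAtMode = (c - a) / (b - a);     // CDF value at the mode
    return p <= pAtMode ? a + Math.Sqrt(p * (b - a) * (c - a))
                        : b - Math.Sqrt((1.0 - p) * (b - a) * (b - c));
}

var rng = new Random(5);
Console.WriteLine($"{TriangularPdf(0, 2, 0.5, 1.0)} {TriangularInvCdf(0, 2, 0.5, rng.NextDouble())}");
```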
- [API documentation: the remaining static Triangular array-fill overload without a random source. Truncated Pareto distribution: parameterized by scale xm > 0, shape α > 0 and truncation T > xm; the constructor throws if xm or α is non-positive or if T ≤ xm. The documented members are ToString, parameter validation, the random source property, the xm/α/T getters, the n-th raw moment (n ≥ 1), the mean, variance, standard deviation, skewness, median, minimum and maximum (mode and entropy are not supported), instance and static sampling (single sample, array fill, sequence), and the instance PDF, log-PDF and CDF.]
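A hedged sketch of what an upper-truncated Pareto CDF and inverse-transform sampler look like, assuming the usual construction of restricting the Pareto CDF to [xm, T] and renormalizing (illustration only, not the library's code; helper names are ours).

```csharp
using System;

// Truncated Pareto(xm, alpha, T) on [xm, T]:
// cdf(x) = (1 - (xm/x)^alpha) / (1 - (xm/T)^alpha), inverted for sampling.
double TruncParetoCdf(double xm, double alpha, double t, double x)
{
    if (x <= xm) return 0.0;
    if (x >= t) return 1.0;
    return (1.0 - Math.Pow(xm / x, alpha)) / (1.0 - Math.Pow(xm / t, alpha));
}

double TruncParetoInvCdf(double xm, double alpha, double t, double p) =>
    xm * Math.Pow(1.0 - p * (1.0 - Math.Pow(xm / t, alpha)), -1.0 / alpha);

var rng = new Random(9);
double draw = TruncParetoInvCdf(1.0, 2.0, 10.0, rng.NextDouble());
Console.WriteLine($"{TruncParetoCdf(1.0, 2.0, 10.0, 5.0)} {draw}");
```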
- [API documentation: the truncated Pareto instance and static inverse CDF, and the static PDF, log-PDF and CDF for given xm, α and T. Continuous univariate Weibull distribution (see Wikipedia: Weibull distribution), parameterized by a shape k > 0 and a scale λ > 0; the implementation caches the intermediate result 1/(scale^shape), which gives slightly better numerical precision in certain constellations at no extra cost. The documented members include the constructors (with an optional random source), ToString, parameter validation, the shape and scale getters, the random source property, and the mean, variance, standard deviation, entropy, skewness and mode properties.]
- [API documentation: further Weibull members: the median, minimum and maximum, the instance PDF, log-PDF and CDF, instance Sample/Samples/Fill, the static PDF, log-PDF and CDF for given k and λ, a parameter-estimation method implemented according to "Parameter estimation of the Weibull probability distribution", Hongzhu Qiao and Chris P. Tsokos, 1994 (returning a fitted Weibull distribution), and static Sample/Samples/Fill overloads taking an explicit random source.]
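For illustration (not the library's code), the standard Weibull(k, λ) density and distribution function, and the inverse-transform sampler they imply; helper names are ours.

```csharp
using System;

// Standard Weibull(shape k, scale lambda) formulas:
// pdf(x) = (k/lambda)*(x/lambda)^(k-1)*exp(-(x/lambda)^k),  cdf(x) = 1 - exp(-(x/lambda)^k),
// sample  = lambda * (-ln(1-u))^(1/k)  with u uniform on (0,1).
double WeibullPdf(double k, double lambda, double x) =>
    x < 0.0 ? 0.0 : k / lambda * Math.Pow(x / lambda, k - 1.0) * Math.Exp(-Math.Pow(x / lambda, k));

double WeibullCdf(double k, double lambda, double x) =>
    x < 0.0 ? 0.0 : 1.0 - Math.Exp(-Math.Pow(x / lambda, k));

double WeibullSample(Random rng, double k, double lambda) =>
    lambda * Math.Pow(-Math.Log(1.0 - rng.NextDouble()), 1.0 / k);

var rng = new Random(13);
Console.WriteLine($"{WeibullPdf(1.5, 2.0, 1.0)} {WeibullCdf(1.5, 2.0, 1.0)} {WeibullSample(rng, 1.5, 2.0)}");
```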
- [API documentation: the remaining static Weibull Sample/Samples/Fill overloads without an explicit random source. Multivariate Wishart distribution: parameterized by the degrees of freedom ν and the scale matrix S, it is the conjugate prior for the precision (inverse covariance) matrix of the multivariate normal distribution (see Wikipedia: Wishart distribution). The documented members are the degrees-of-freedom and scale-matrix fields, a cached Cholesky factorization of the scale matrix, the constructors (with an optional random source), parameter validation, the parameter properties, ToString, the random source property, the mean, mode and variance, the density (throwing if the argument does not have the same dimensions as the scale matrix), and sampling via "Algorithm AS 53: Wishart Variate Generator", W. B. Smith and R. R. Hocking, Applied Statistics 21(3), 1972, pp. 341-345, including an overload that reuses a given Cholesky decomposition. Discrete univariate Zipf distribution: Zipf's law, an empirical law formulated using mathematical statistics, states that many kinds of data studied in the physical and social sciences are well approximated by a Zipfian distribution, one of a family of related discrete power-law distributions (see Wikipedia: Zipf distribution); it has an s parameter and an n parameter, held in two fields, and a first constructor.]
- - - - Initializes a new instance of the class. - - The s parameter of the distribution. - The n parameter of the distribution. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The s parameter of the distribution. - The n parameter of the distribution. - - - - Gets or sets the s parameter of the distribution. - - - - - Gets or sets the n parameter of the distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The s parameter of the distribution. - The n parameter of the distribution. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The s parameter of the distribution. - The n parameter of the distribution. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The s parameter of the distribution. - The n parameter of the distribution. - the cumulative distribution at location . - - - - - Generates a sample from the Zipf distribution without doing parameter checking. - - The random number generator to use. - The s parameter of the distribution. - The n parameter of the distribution. - a random number from the Zipf distribution. - - - - Draws a random sample from the distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of zipf distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a random variable. - - The random number generator to use. - The s parameter of the distribution. - The n parameter of the distribution. - - - - Samples a sequence of this random variable. - - The random number generator to use. - The s parameter of the distribution. - The n parameter of the distribution. 
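Putting the Zipf members above together (PMF, CDF and sampling), a short sketch; the names `Zipf`, `Probability`, `CumulativeDistribution` and `Samples` are assumed, and the parameters s = 1.2, n = 50 are only an example:

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class ZipfDemo
{
    static void Main()
    {
        // Zipf with exponent s = 1.2 over the support {1, ..., 50}.
        var zipf = new Zipf(1.2, 50);

        Console.WriteLine(zipf.Probability(1));             // P(X = 1)
        Console.WriteLine(zipf.CumulativeDistribution(5));  // P(X <= 5)

        // Ten draws; small ranks should dominate, as expected for a power law.
        int[] draws = zipf.Samples().Take(10).ToArray();
        Console.WriteLine(string.Join(", ", draws));
    }
}
```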
- - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The s parameter of the distribution. - The n parameter of the distribution. - - - - Samples a random variable. - - The s parameter of the distribution. - The n parameter of the distribution. - - - - Samples a sequence of this random variable. - - The s parameter of the distribution. - The n parameter of the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The s parameter of the distribution. - The n parameter of the distribution. - - - - Integer number theory functions. - - - - - Canonical Modulus. The result has the sign of the divisor. - - - - - Canonical Modulus. The result has the sign of the divisor. - - - - - Canonical Modulus. The result has the sign of the divisor. - - - - - Canonical Modulus. The result has the sign of the divisor. - - - - - Canonical Modulus. The result has the sign of the divisor. - - - - - Remainder (% operator). The result has the sign of the dividend. - - - - - Remainder (% operator). The result has the sign of the dividend. - - - - - Remainder (% operator). The result has the sign of the dividend. - - - - - Remainder (% operator). The result has the sign of the dividend. - - - - - Remainder (% operator). The result has the sign of the dividend. - - - - - Find out whether the provided 32 bit integer is an even number. - - The number to verify whether it's even. - True if and only if it is an even number. - - - - Find out whether the provided 64 bit integer is an even number. - - The number to verify whether it's even. - True if and only if it is an even number. - - - - Find out whether the provided 32 bit integer is an odd number. - - The number to verify whether it's odd. - True if and only if it is an odd number. - - - - Find out whether the provided 64 bit integer is an odd number. - - The number to verify whether it's odd. - True if and only if it is an odd number. - - - - Find out whether the provided 32 bit integer is a perfect power of two. - - The number to verify whether it's a power of two. - True if and only if it is a power of two. - - - - Find out whether the provided 64 bit integer is a perfect power of two. - - The number to verify whether it's a power of two. - True if and only if it is a power of two. - - - - Find out whether the provided 32 bit integer is a perfect square, i.e. a square of an integer. - - The number to verify whether it's a perfect square. - True if and only if it is a perfect square. - - - - Find out whether the provided 64 bit integer is a perfect square, i.e. a square of an integer. - - The number to verify whether it's a perfect square. - True if and only if it is a perfect square. - - - - Raises 2 to the provided integer exponent (0 <= exponent < 31). - - The exponent to raise 2 up to. - 2 ^ exponent. - - - - - Raises 2 to the provided integer exponent (0 <= exponent < 63). - - The exponent to raise 2 up to. - 2 ^ exponent. - - - - - Evaluate the binary logarithm of an integer number. - - Two-step method using a De Bruijn-like sequence table lookup. - - - - Find the closest perfect power of two that is larger or equal to the provided - 32 bit integer. - - The number of which to find the closest upper power of two. - A power of two. - - - - - Find the closest perfect power of two that is larger or equal to the provided - 64 bit integer. - - The number of which to find the closest upper power of two.
- A power of two. - - - - - Returns the greatest common divisor (gcd) of two integers using Euclid's algorithm. - - First Integer: a. - Second Integer: b. - Greatest common divisor gcd(a,b) - - - - Returns the greatest common divisor (gcd) of a set of integers using Euclid's - algorithm. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Returns the greatest common divisor (gcd) of a set of integers using Euclid's algorithm. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). - - First Integer: a. - Second Integer: b. - Resulting x, such that a*x + b*y = gcd(a,b). - Resulting y, such that a*x + b*y = gcd(a,b) - Greatest common divisor gcd(a,b) - - - long x,y,d; - d = Fn.GreatestCommonDivisor(45,18,out x, out y); - -> d == 9 && x == 1 && y == -2 - - The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. - - - - - Returns the least common multiple (lcm) of two integers using Euclid's algorithm. - - First Integer: a. - Second Integer: b. - Least common multiple lcm(a,b) - - - - Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. - - List of Integers. - Least common multiple lcm(list of integers) - - - - Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. - - List of Integers. - Least common multiple lcm(list of integers) - - - - Returns the greatest common divisor (gcd) of two big integers. - - First Integer: a. - Second Integer: b. - Greatest common divisor gcd(a,b) - - - - Returns the greatest common divisor (gcd) of a set of big integers. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Returns the greatest common divisor (gcd) of a set of big integers. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). - - First Integer: a. - Second Integer: b. - Resulting x, such that a*x + b*y = gcd(a,b). - Resulting y, such that a*x + b*y = gcd(a,b) - Greatest common divisor gcd(a,b) - - - long x,y,d; - d = Fn.GreatestCommonDivisor(45,18,out x, out y); - -> d == 9 && x == 1 && y == -2 - - The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. - - - - - Returns the least common multiple (lcm) of two big integers. - - First Integer: a. - Second Integer: b. - Least common multiple lcm(a,b) - - - - Returns the least common multiple (lcm) of a set of big integers. - - List of Integers. - Least common multiple lcm(list of integers) - - - - Returns the least common multiple (lcm) of a set of big integers. - - List of Integers. - Least common multiple lcm(list of integers) - - - - Collection of functions equivalent to those provided by Microsoft Excel - but backed instead by Math.NET Numerics. - We do not recommend to use them except in an intermediate phase when - porting over solutions previously implemented in Excel. - - - - - An algorithm failed to converge. - - - - - An algorithm failed to converge due to a numerical breakdown. - - - - - An error occurred calling native provider function. - - - - - An error occurred calling native provider function. - - - - - Native provider was unable to allocate sufficient memory. - - - - - Native provider failed LU inversion do to a singular U matrix. 
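The integer number-theory helpers documented above (canonical modulus vs. remainder, parity and power-of-two tests, gcd/lcm) are typically reached through the static `Euclid` class; a small sketch under that assumption, reusing the gcd(45, 18) = 9 worked example from the text:

```csharp
using System;
using MathNet.Numerics;

class EuclidDemo
{
    static void Main()
    {
        // Canonical modulus keeps the sign of the divisor; remainder keeps the dividend's.
        Console.WriteLine(Euclid.Modulus(-3, 5));    // 2
        Console.WriteLine(Euclid.Remainder(-3, 5));  // -3

        Console.WriteLine(Euclid.IsPowerOfTwo(64));         // True
        Console.WriteLine(Euclid.CeilingToPowerOfTwo(100)); // 128

        // gcd(45, 18) = 9 and lcm(45, 18) = 90, matching the worked example above.
        Console.WriteLine(Euclid.GreatestCommonDivisor(45, 18));
        Console.WriteLine(Euclid.LeastCommonMultiple(45, 18));
    }
}
```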
- - - - - Compound Monthly Return or Geometric Return or Annualized Return - - - - - Average Gain or Gain Mean - This is a simple average (arithmetic mean) of the periods with a gain. It is calculated by summing the returns for gain periods (return 0) - and then dividing the total by the number of gain periods. - - http://www.offshore-library.com/kb/statistics.php - - - - Average Loss or LossMean - This is a simple average (arithmetic mean) of the periods with a loss. It is calculated by summing the returns for loss periods (return < 0) - and then dividing the total by the number of loss periods. - - http://www.offshore-library.com/kb/statistics.php - - - - Calculation is similar to Standard Deviation , except it calculates an average (mean) return only for periods with a gain - and measures the variation of only the gain periods around the gain mean. Measures the volatility of upside performance. - © Copyright 1996, 1999 Gary L.Gastineau. First Edition. © 1992 Swiss Bank Corporation. - - - - - Similar to standard deviation, except this statistic calculates an average (mean) return for only the periods with a loss and then - measures the variation of only the losing periods around this loss mean. This statistic measures the volatility of downside performance. - - http://www.offshore-library.com/kb/statistics.php - - - - This measure is similar to the loss standard deviation except the downside deviation - considers only returns that fall below a defined minimum acceptable return (MAR) rather than the arithmetic mean. - For example, if the MAR is 7%, the downside deviation would measure the variation of each period that falls below - 7%. (The loss standard deviation, on the other hand, would take only losing periods, calculate an average return for - the losing periods, and then measure the variation between each losing return and the losing return average). - - - - - A measure of volatility in returns below the mean. It's similar to standard deviation, but it only - looks at periods where the investment return was less than average return. - - - - - Measures a fund’s average gain in a gain period divided by the fund’s average loss in a losing - period. Periods can be monthly or quarterly depending on the data frequency. - - - - - Find value x that minimizes the scalar function f(x), constrained within bounds, using the Golden Section algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. - The missing gradient is evaluated numerically (forward difference). - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. - For more options and diagnostics consider to use directly. 
- An alternative routine using conjugate gradients (CG) is available in . - - - - - Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. - For more options and diagnostics consider to use directly. - An alternative routine using conjugate gradients (CG) is available in . - - - - - Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Newton algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Newton algorithm. - For more options and diagnostics consider to use directly. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. - Maximum number of iterations. Example: 100. - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. - Maximum number of iterations. Example: 100. - - - - Find both complex roots of the quadratic equation c + b*x + a*x^2 = 0. - Note the special coefficient order ascending by exponent (consistent with polynomials). - - - - - Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. - Note the special coefficient order ascending by exponent (consistent with polynomials). - - - - - Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix - - The coefficients of the polynomial in ascending order, e.g. new double[] {5, 0, 2} = "5 + 0 x^1 + 2 x^2" - The roots of the polynomial - - - - Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix - - The polynomial. - The roots of the polynomial - - - - Find all roots of the Chebychev polynomial of the first kind. - - The polynomial order and therefore the number of roots. - The real domain interval begin where to start sampling. - The real domain interval end where to stop sampling. - Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*(2i-1)/(2n)) - - - - Find all roots of the Chebychev polynomial of the second kind. - - The polynomial order and therefore the number of roots. - The real domain interval begin where to start sampling. - The real domain interval end where to stop sampling. - Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*i/(n-1)) - - - - Least-Squares Curve Fitting Routines - - - - - Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, - returning its best fitting parameters as [a, b] array, - where a is the intercept and b the slope. 
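The root-finding entries above describe a bracketed solver for f(x) = 0 with an accuracy target and an iteration limit, plus closed-form quadratic roots with coefficients given in ascending order. A minimal sketch, assuming the facade methods `FindRoots.OfFunction` and `FindRoots.Quadratic`:

```csharp
using System;
using MathNet.Numerics;

class RootsDemo
{
    static void Main()
    {
        // Bracketed root of f(x) = x^2 - 2 on [1, 2], refined to the given accuracy.
        double root = FindRoots.OfFunction(x => x * x - 2.0, 1.0, 2.0, 1e-12);
        Console.WriteLine(root);   // ~1.4142135623730951

        // Both complex roots of c + b*x + a*x^2 = 0, coefficients ascending by exponent:
        // here 2 + 0*x + 1*x^2, i.e. x^2 = -2.
        var quad = FindRoots.Quadratic(2.0, 0.0, 1.0);
        Console.WriteLine(quad.Item1);
        Console.WriteLine(quad.Item2);
    }
}
```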
- - - - - Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, - returning its best fitting parameter b, - where the intercept is zero and b the slope. - - - - - Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), - returning its best fitting parameters as (a, r) tuple. - - - - - Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), - returning its best fitting parameters as (a, b) tuple. - - - - - Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, - returning its best fitting parameters as (a, b) tuple. - - - - - Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. - A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. - - - - - Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, - returning a function y' for the best fitting polynomial. - A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. - - - - - Weighted Least-Squares fitting the points (x,y) and weights w to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. - A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - If an intercept is added, its coefficient will be prepended to the resulting parameters. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, - returning a function y' for the best fitting combination. 
- If an intercept is added, its coefficient will be prepended to the resulting parameters. - - - - - Weighted Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) and weights w to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning a function y' for the best fitting combination. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), - returning its best fitting parameter p. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), - returning its best fitting parameter p0 and p1. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), - returning its best fitting parameter p0, p1 and p2. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), - returning a function y' for the best fitting curve. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), - returning a function y' for the best fitting curve. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), - returning a function y' for the best fitting curve. - - - - - Generate samples by sampling a function at the provided points. - - - - - Generate a sample sequence by sampling a function at the provided point sequence. - - - - - Generate samples by sampling a function at the provided points. - - - - - Generate a sample sequence by sampling a function at the provided point sequence. - - - - - Generate a linearly spaced sample vector of the given length between the specified values (inclusive). - Equivalent to MATLAB linspace but with the length as first instead of last argument. 
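For the least-squares fitting routines described above, a short sketch; it assumes `Fit.Line` returns the intercept/slope pair of y = a + b*x and `Fit.Polynomial` returns coefficients in ascending order, and the sample data are invented for illustration:

```csharp
using System;
using MathNet.Numerics;

class FitDemo
{
    static void Main()
    {
        // Illustrative samples scattered around y = 1 + 2x.
        double[] x = { 0.0, 1.0, 2.0, 3.0, 4.0 };
        double[] y = { 1.1, 2.9, 5.2, 6.8, 9.1 };

        // Straight line y = a + b*x; Item1 is the intercept a, Item2 the slope b.
        var line = Fit.Line(x, y);
        Console.WriteLine($"a = {line.Item1}, b = {line.Item2}");

        // Second-order polynomial p0 + p1*x + p2*x^2, coefficients in ascending order.
        double[] p = Fit.Polynomial(x, y, 2);
        Console.WriteLine(string.Join(", ", p));

        // Evaluate the fitted polynomial at x = 2.5 via Horner's scheme.
        double y25 = p[0] + 2.5 * (p[1] + 2.5 * p[2]);
        Console.WriteLine(y25);
    }
}
```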
- - - - - Generate samples by sampling a function at linearly spaced points between the specified values (inclusive). - - - - - Generate a base 10 logarithmically spaced sample vector of the given length between the specified decade exponents (inclusive). - Equivalent to MATLAB logspace but with the length as first instead of last argument. - - - - - Generate samples by sampling a function at base 10 logarithmically spaced points between the specified decade exponents (inclusive). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. - Equivalent to MATLAB colon operator (:). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. - Equivalent to MATLAB colon operator (:). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - Equivalent to MATLAB double colon operator (::). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - Equivalent to MATLAB double colon operator (::). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provide step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - Equivalent to MATLAB double colon operator (::). - - - - - Generate samples by sampling a function at linearly spaced points within the inclusive interval (start, stop) and the provide step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - - - - - Create a periodic wave. - - The number of samples to generate. - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create a periodic wave. - - The number of samples to generate. - The function to apply to each of the values and evaluate the resulting sample. - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create an infinite periodic wave sequence. - - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create an infinite periodic wave sequence. - - The function to apply to each of the values and evaluate the resulting sample. - Samples per time unit (Hz). 
Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create a Sine wave. - - The number of samples to generate. - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The maximal reached peak. - The mean, or DC part, of the signal. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create an infinite Sine wave sequence. - - Samples per unit. - Frequency in samples per unit. - The maximal reached peak. - The mean, or DC part, of the signal. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create a periodic square wave, starting with the high phase. - - The number of samples to generate. - Number of samples of the high phase. - Number of samples of the low phase. - Sample value to be emitted during the low phase. - Sample value to be emitted during the high phase. - Optional delay. - - - - Create an infinite periodic square wave sequence, starting with the high phase. - - Number of samples of the high phase. - Number of samples of the low phase. - Sample value to be emitted during the low phase. - Sample value to be emitted during the high phase. - Optional delay. - - - - Create a periodic triangle wave, starting with the raise phase from the lowest sample. - - The number of samples to generate. - Number of samples of the raise phase. - Number of samples of the fall phase. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create an infinite periodic triangle wave sequence, starting with the raise phase from the lowest sample. - - Number of samples of the raise phase. - Number of samples of the fall phase. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create a periodic sawtooth wave, starting with the lowest sample. - - The number of samples to generate. - Number of samples a full sawtooth period. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create an infinite periodic sawtooth wave sequence, starting with the lowest sample. - - Number of samples a full sawtooth period. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create an array with each field set to the same value. - - The number of samples to generate. - The value that each field should be set to. - - - - Create an infinite sequence where each element has the same value. - - The value that each element should be set to. - - - - Create a Heaviside Step sample vector. - - The number of samples to generate. - The maximal reached peak. - Offset to the time axis. - - - - Create an infinite Heaviside Step sample sequence. - - The maximal reached peak. - Offset to the time axis. - - - - Create a Kronecker Delta impulse sample vector. - - The number of samples to generate. - The maximal reached peak. - Offset to the time axis. Zero or positive. - - - - Create a Kronecker Delta impulse sample vector. - - The maximal reached peak. - Offset to the time axis, hence the sample index of the impulse. - - - - Create a periodic Kronecker Delta impulse sample vector. - - The number of samples to generate. - impulse sequence period. - The maximal reached peak. - Offset to the time axis. Zero or positive. 
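The signal generators above (linearly spaced vectors, sine and square waves) are usually reached through the static `Generate` class; a sketch assuming the overloads `LinearSpaced`, `Sinusoidal` and `Square` shown here exist in the installed version:

```csharp
using System;
using MathNet.Numerics;

class GenerateDemo
{
    static void Main()
    {
        // Five linearly spaced points between 0 and 1 (inclusive), like MATLAB linspace.
        double[] grid = Generate.LinearSpaced(5, 0.0, 1.0);
        Console.WriteLine(string.Join(", ", grid));   // 0, 0.25, 0.5, 0.75, 1

        // Sixteen samples of a 50 Hz sine sampled at 8 kHz with amplitude 1.0;
        // the sampling rate must exceed twice the frequency (Nyquist), as noted above.
        double[] sine = Generate.Sinusoidal(16, 8000.0, 50.0, 1.0);
        Console.WriteLine(string.Join(", ", sine));

        // Square wave: 3 high samples at +1 followed by 3 low samples at -1, repeated.
        double[] square = Generate.Square(12, 3, 3, -1.0, 1.0);
        Console.WriteLine(string.Join(", ", square));
    }
}
```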
- - - - Create a Kronecker Delta impulse sample vector. - - impulse sequence period. - The maximal reached peak. - Offset to the time axis. Zero or positive. - - - - Generate samples generated by the given computation. - - - - - Generate an infinite sequence generated by the given computation. - - - - - Generate a Fibonacci sequence, including zero as first value. - - - - - Generate an infinite Fibonacci sequence, including zero as first value. - - - - - Create random samples, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Create an infinite random sample sequence, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate samples by sampling a function at samples from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate a sample sequence by sampling a function at samples from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate samples by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate a sample sequence by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Create samples with independent amplitudes of standard distribution. - - - - - Create an infinite sample sequence with independent amplitudes of standard distribution. - - - - - Create samples with independent amplitudes of normal distribution and a flat spectral density. - - - - - Create an infinite sample sequence with independent amplitudes of normal distribution and a flat spectral density. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Generate samples by sampling a function at samples from a probability distribution. - - - - - Generate a sample sequence by sampling a function at samples from a probability distribution. - - - - - Generate samples by sampling a function at sample pairs from a probability distribution. - - - - - Generate a sample sequence by sampling a function at sample pairs from a probability distribution. - - - - - Globalized String Handling Helpers - - - - - Tries to get a from the format provider, - returning the current culture if it fails. - - - An that supplies culture-specific - formatting information. - - A instance. - - - - Tries to get a from the format - provider, returning the current culture if it fails. - - - An that supplies culture-specific - formatting information. - - A instance. - - - - Tries to get a from the format provider, returning the current culture if it fails. - - - An that supplies culture-specific - formatting information. - - A instance. - - - - Globalized Parsing: Tokenize a node by splitting it into several nodes. - - Node that contains the trimmed string to be tokenized. - List of keywords to tokenize by. - keywords to skip looking for (because they've already been handled). 
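Alongside the deterministic generators, the random-sample helpers described above can be sketched as follows; `Generate.Uniform` and the `Generate.Random(length, distribution)` overload are assumed names:

```csharp
using System;
using MathNet.Numerics;
using MathNet.Numerics.Distributions;

class RandomSamplesDemo
{
    static void Main()
    {
        // Fast uniform [0,1) samples (reduced randomness guarantees, as noted above).
        double[] u = Generate.Uniform(4);
        Console.WriteLine(string.Join(", ", u));

        // Samples drawn from an explicit distribution object, here Normal(0, 2).
        double[] g = Generate.Random(4, new Normal(0.0, 2.0));
        Console.WriteLine(string.Join(", ", g));
    }
}
```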
- - - - Globalized Parsing: Parse a double number - - First token of the number. - Culture Info. - The parsed double number using the given culture information. - - - - - Globalized Parsing: Parse a float number - - First token of the number. - Culture Info. - The parsed float number using the given culture information. - - - - - Calculates r^2, the square of the sample correlation coefficient between - the observed outcomes and the observed predictor values. - Not to be confused with R^2, the coefficient of determination, see . - - The modelled/predicted values - The observed/actual values - Squared Person product-momentum correlation coefficient. - - - - Calculates r, the sample correlation coefficient between the observed outcomes - and the observed predictor values. - - The modelled/predicted values - The observed/actual values - Person product-momentum correlation coefficient. - - - - Calculates the Standard Error of the regression, given a sequence of - modeled/predicted values, and a sequence of actual/observed values - - The modelled/predicted values - The observed/actual values - The Standard Error of the regression - - - - Calculates the Standard Error of the regression, given a sequence of - modeled/predicted values, and a sequence of actual/observed values - - The modelled/predicted values - The observed/actual values - The degrees of freedom by which the - number of samples is reduced for performing the Standard Error calculation - The Standard Error of the regression - - - - Calculates the R-Squared value, also known as coefficient of determination, - given some modelled and observed values. - - The values expected from the model. - The actual values obtained. - Coefficient of determination. - - - - Complex Fast (FFT) Implementation of the Discrete Fourier Transform (DFT). - - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Real part of the sample vector, where the FFT is evaluated in place. - Imaginary part of the sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Real part of the sample vector, where the FFT is evaluated in place. - Imaginary part of the sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed from the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. 
- Fourier Transform Convention Options. - - - - Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed form the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. - - Sample data, where the FFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. - - Sample data, where the FFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. - - Sample data, organized row by row, where the FFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. - - Sample data, organized row by row, where the FFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. - - Sample matrix, where the FFT is evaluated in place - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. - - Sample matrix, where the FFT is evaluated in place - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Real part of the sample vector, where the iFFT is evaluated in place. - Imaginary part of the sample vector, where the iFFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. 
- - Real part of the sample vector, where the iFFT is evaluated in place. - Imaginary part of the sample vector, where the iFFT is evaluated in place. - Fourier Transform Convention Options. - - - - Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed form the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. - Fourier Transform Convention Options. - - - - Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed form the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. - - Spectrum data, where the iFFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. - - Spectrum data, where the iFFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. - - Sample data, organized row by row, where the iFFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. - - Sample data, organized row by row, where the iFFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. - - Sample matrix, where the iFFT is evaluated in place - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. - - Sample matrix, where the iFFT is evaluated in place - Fourier Transform Convention Options. - - - - Naive forward DFT, useful e.g. to verify faster algorithms. - - Time-space sample vector. - Fourier Transform Convention Options. - Corresponding frequency-space vector. - - - - Naive forward DFT, useful e.g. to verify faster algorithms. - - Time-space sample vector. - Fourier Transform Convention Options. - Corresponding frequency-space vector. 
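The forward and inverse FFT routines above operate in place on a complex sample vector and accept arbitrary lengths (Bluestein handles non-power-of-two sizes). A round-trip sketch, assuming the `Fourier.Forward`/`Fourier.Inverse` overloads that use the default convention:

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.IntegralTransforms;

class FourierDemo
{
    static void Main()
    {
        // Eight complex time-domain samples of one cosine period.
        var samples = new Complex[8];
        for (int i = 0; i < samples.Length; i++)
        {
            samples[i] = new Complex(Math.Cos(2 * Math.PI * i / 8.0), 0.0);
        }

        Fourier.Forward(samples);   // in-place FFT, default convention
        Console.WriteLine(samples[1]);

        Fourier.Inverse(samples);   // round-trip back to the time domain
        Console.WriteLine(samples[1]);
    }
}
```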
- - - - Naive inverse DFT, useful e.g. to verify faster algorithms. - - Frequency-space sample vector. - Fourier Transform Convention Options. - Corresponding time-space vector. - - - - Naive inverse DFT, useful e.g. to verify faster algorithms. - - Frequency-space sample vector. - Fourier Transform Convention Options. - Corresponding time-space vector. - - - - Radix-2 forward FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Radix-2 forward FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Radix-2 inverse FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Radix-2 inverse FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Bluestein forward FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Bluestein forward FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Bluestein inverse FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Bluestein inverse FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Generate the frequencies corresponding to each index in frequency space. - The frequency space has a resolution of sampleRate/N. - Index 0 corresponds to the DC part, the following indices correspond to - the positive frequencies up to the Nyquist frequency (sampleRate/2), - followed by the negative frequencies wrapped around. - - Number of samples. - The sampling rate of the time-space data. - - - - Fourier Transform Convention - - - - - Inverse integrand exponent (forward: positive sign; inverse: negative sign). - - - - - Only scale by 1/N in the inverse direction; No scaling in forward direction. - - - - - Don't scale at all (neither on forward nor on inverse transformation). - - - - - Universal; Symmetric scaling and common exponent (used in Maple). - - - - - Only scale by 1/N in the inverse direction; No scaling in forward direction (used in Matlab). [= AsymmetricScaling] - - - - - Inverse integrand exponent; No scaling at all (used in all Numerical Recipes based implementations). [= InverseExponent | NoScaling] - - - - - Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). - - - Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). - - - - - Naive forward DHT, useful e.g. to verify faster algorithms. - - Time-space sample vector. - Hartley Transform Convention Options. - Corresponding frequency-space vector. - - - - Naive inverse DHT, useful e.g. to verify faster algorithms. - - Frequency-space sample vector. - Hartley Transform Convention Options. - Corresponding time-space vector. - - - - Rescale FFT-the resulting vector according to the provided convention options. - - Fourier Transform Convention Options. - Sample Vector. - - - - Rescale the iFFT-resulting vector according to the provided convention options. - - Fourier Transform Convention Options. - Sample Vector. - - - - Naive generic DHT, useful e.g. 
to verify faster algorithms. - - Time-space sample vector. - Corresponding frequency-space vector. - - - - Hartley Transform Convention - - - - - Only scale by 1/N in the inverse direction; No scaling in forward direction. - - - - - Don't scale at all (neither on forward nor on inverse transformation). - - - - - Universal; Symmetric scaling. - - - - - Numerical Integration (Quadrature). - - - - - Approximation of the definite integral of an analytic smooth function on a closed interval. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function on a closed interval. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Approximation of the finite integral in the given interval. - - - - Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. - - The 2-dimensional analytic smooth function to integrate. - Where the interval starts for the first (inside) integral, exclusive and finite. - Where the interval ends for the first (inside) integral, exclusive and finite. - Where the interval starts for the second (outside) integral, exclusive and finite. - /// Where the interval ends for the second (outside) integral, exclusive and finite. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. - - The 2-dimensional analytic smooth function to integrate. - Where the interval starts for the first (inside) integral, exclusive and finite. - Where the interval ends for the first (inside) integral, exclusive and finite. - Where the interval starts for the second (outside) integral, exclusive and finite. - /// Where the interval ends for the second (outside) integral, exclusive and finite. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth function to integrate. - Where the interval starts. - Where the interval stops. - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by Gauss-Legendre quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth function to integrate. - Where the interval starts. - Where the interval stops. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. 
[... diff hunk omitted: removed lines of the bundled MathNet.Numerics XML API documentation file — definite-integral quadrature (Gauss-Kronrod, Gauss-Legendre, double-exponential, trapezium and Simpson rules), interpolation (barycentric, Floater-Hormann, Bulirsch-Stoer, Neville polynomial, linear/log-linear/cubic/Akima/step splines), and linear algebra types (DenseMatrix, DenseVector, DiagonalMatrix, Cholesky factorization); no project-specific content ...]
- The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. 
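The Cholesky and LU factorization classes described above are normally obtained from an existing matrix through `Matrix<double>.Cholesky()` and `Matrix<double>.LU()` rather than by constructing them directly. A minimal sketch of that use, assuming the Math.NET Numerics 3.x API; the matrix values are made up for illustration:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class FactorizationDemo
{
    static void Main()
    {
        // Symmetric positive definite system: Cholesky gives A = L*L'.
        var a = DenseMatrix.OfArray(new[,] { { 4.0, 1.0 }, { 1.0, 3.0 } });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0 });
        var cholesky = a.Cholesky();            // throws if A is not symmetric positive definite
        Vector<double> x1 = cholesky.Solve(b);  // solves A*x = b using the cached factor
        Console.WriteLine($"Cholesky: x = {x1}, det(A) = {cholesky.Determinant}");

        // General square system: LU with partial pivoting, P*A = L*U.
        var m = DenseMatrix.OfArray(new[,] { { 2.0, 1.0 }, { 5.0, 7.0 } });
        var lu = m.LU();
        Vector<double> x2 = lu.Solve(DenseVector.OfArray(new[] { 11.0, 13.0 }));
        Matrix<double> inverse = lu.Inverse();  // inverse computed from the LU factors
        Console.WriteLine($"LU: x = {x2}, det(M) = {lu.Determinant}");
        Console.WriteLine($"inverse(M) = {inverse}");
    }
}
```

Cholesky needs roughly half the arithmetic of LU, but it only applies to symmetric positive definite matrices, which is why the constructor throws for anything else.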
- - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The type of QR factorization to perform. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - Matrix V is encoded in the property EigenVectors in the way that: - - column corresponding to real eigenvalue represents real eigenvector, - - columns corresponding to the pair of complex conjugate eigenvalues - lambda[i] and lambda[i+1] encode real and imaginary parts of eigenvectors. - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. 
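A common use of the QR and SVD classes just described is least-squares solving of an overdetermined system and inspecting the numerical rank and condition number. A hedged sketch, again assuming Math.NET Numerics 3.x; `Rank` and `ConditionNumber` are the property names matching the "effective numerical matrix rank" and "condition number max(S)/min(S)" entries above:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class QrSvdDemo
{
    static void Main()
    {
        // Overdetermined system (3 equations, 2 unknowns): QR.Solve returns the least-squares fit.
        var a = DenseMatrix.OfArray(new[,]
        {
            { 1.0, 1.0 },
            { 1.0, 2.0 },
            { 1.0, 3.0 }
        });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0, 2.0 });

        var qr = a.QR();                    // A = Q*R via Householder transformations
        Vector<double> x = qr.Solve(b);
        Console.WriteLine($"least-squares x = {x}");

        // SVD of the same matrix: singular values, numerical rank, condition number.
        var svd = a.Svd(true);              // true: also compute the U and VT factors
        Console.WriteLine($"singular values = {svd.S}");
        Console.WriteLine($"rank = {svd.Rank}, condition number = {svd.ConditionNumber}");
    }
}
```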
- - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. 
- For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Symmetric Householder reduction to tridiagonal form. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tred2 by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. 
- Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Complex scalar division X/Y. - - Real part of X - Imaginary part of X - Real part of Y - Imaginary part of Y - Division result as a number. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. 
- - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Double value z1 - Double value z2 - Result multiplication of signum function and absolute value - - - - Swap column and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Calculate Norm 2 of the column in matrix starting from row - - Source matrix - The number of rows in - Column index - Start row index - Norm2 (Euclidean norm) of the column - - - - Calculate Norm 2 of the vector starting from index - - Source vector - Start index - Norm2 (Euclidean norm) of the vector - - - - Calculate dot product of and - - Source matrix - The number of rows in - Index of column A - Index of column B - Starting row index - Dot product value - - - - Performs rotation of points in the plane. 
Given two vectors x and y , - each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) - - Source matrix - The number of rows in - Index of column A - Index of column B - Scalar "c" value - Scalar "s" value - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - double version of the class. - - - - - Initializes a new instance of the Matrix class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Returns the conjugate transpose of this matrix. - - The conjugate transpose of this matrix. - - - - Puts the conjugate transpose of this matrix into the result matrix. - - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to divide by each element of the matrix. - The matrix to store the result of the division. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. 
- - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Calculates the p-norms of all row vectors. 
- Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the p-norms of all column vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all row vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all column vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the value sum of each row vector. - - - - - Calculates the absolute value sum of each row vector. - - - - - Calculates the value sum of each column vector. - - - - - Calculates the absolute value sum of each column vector. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' - of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the - BiCGStab can be used on non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The Bi-CGSTAB algorithm was taken from:
- Templates for the solution of linear systems: Building blocks - for iterative methods -
- Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, - June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, - Charles Romine and Henk van der Vorst -
- Url: http://www.netlib.org/templates/Templates.html -
- Algorithm is described in Chapter 2, section 2.3.8, page 27 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
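A minimal sketch of the kind of use referred to above, assuming the Math.NET Numerics 3.x solver API (`Iterator<double>`, `IterationCountStopCriterion<double>` and `ResidualStopCriterion<double>` are the 3.x spellings; older releases use *Criterium*). The test matrix, right-hand side and tolerances are arbitrary:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class BiCgStabDemo
{
    static void Main()
    {
        // Small non-symmetric test system A*x = b.
        var a = SparseMatrix.OfArray(new[,]
        {
            { 4.0, 1.0, 0.0 },
            { 2.0, 5.0, 1.0 },
            { 0.0, 1.0, 3.0 }
        });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0, 3.0 });
        var x = new DenseVector(3);              // result vector, filled in by the solver

        // Stop after 1000 iterations or once the residual drops below 1e-10.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new BiCgStab();
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine($"x = {x}");
        Console.WriteLine($"residual = {(b - a * x).L2Norm()}");
    }
}
```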
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient , A. - The solution , b. - The result , x. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A composite matrix solver. The actual solver is made by a sequence of - matrix solvers. - - - - Solver based on:
- Faster PDE-based simulations using robust composite linear solvers
- S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
- Future Generation Computer Systems, Vol 20, 2004, pp 373-387
-
- - Note that if an iterator is passed to this solver it will be used for all the sub-solvers. - -
-
- - - The collection of solvers that will be used - - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A diagonal preconditioner. The preconditioner uses the inverse - of the matrix diagonal as preconditioning values. - - - - - The inverse of the matrix diagonal. - - - - - Returns the decomposed matrix diagonal. - - The matrix diagonal. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - A Generalized Product Bi-Conjugate Gradient iterative matrix solver. - - - - The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an - alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. - Unlike the CG solver the GPBiCG solver can be used on - non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The GPBiCG algorithm was taken from:
- GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with - efficiency and robustness -
- S. Fujino -
- Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 -
-
- - The example code below provides an indication of the possible use of the - solver. - -
-
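As with BiCgStab, the GPBiCG solver is driven through the same Solve(matrix, input, result, iterator, preconditioner) call. A small self-contained sketch under the same Math.NET Numerics 3.x naming assumptions, with arbitrary test data:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class GpBiCgDemo
{
    static void Main()
    {
        var a = SparseMatrix.OfArray(new[,] { { 3.0, 1.0 }, { 2.0, 4.0 } });
        var b = DenseVector.OfArray(new[] { 5.0, 6.0 });
        var x = new DenseVector(2);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        // GpBiCg alternates internally between BiCGStab and GPBiCG steps;
        // the switching intervals are exposed as properties on the solver.
        var solver = new GpBiCg();
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine(x);
    }
}
```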
- - - Indicates the number of BiCGStab steps should be taken - before switching. - - - - - Indicates the number of GPBiCG steps should be taken - before switching. - - - - - Gets or sets the number of steps taken with the BiCgStab algorithm - before switching over to the GPBiCG algorithm. - - - - - Gets or sets the number of steps taken with the GPBiCG algorithm - before switching over to the BiCgStab algorithm. - - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Decide if to do steps with BiCgStab - - Number of iteration - true if yes, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - An incomplete, level 0, LU factorization preconditioner. - - - The ILU(0) algorithm was taken from:
- Iterative methods for sparse linear systems
- Yousef Saad
- Algorithm is described in Chapter 10, section 10.3.2, page 275
-
-
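A sketch of plugging the ILU(0) preconditioner into an iterative solve, assuming it is exposed as `ILU0Preconditioner` as in Math.NET Numerics 3.x; the test system and tolerances are made up:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class Ilu0Demo
{
    static void Main()
    {
        var a = SparseMatrix.OfArray(new[,]
        {
            { 10.0, 1.0, 0.0 },
            {  2.0, 9.0, 1.0 },
            {  0.0, 3.0, 8.0 }
        });
        var b = DenseVector.OfArray(new[] { 1.0, 1.0, 1.0 });
        var x = new DenseVector(3);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(500),
            new ResidualStopCriterion<double>(1e-12));

        // ILU(0) keeps the sparsity pattern of A and stores the combined L/U
        // factors; the solver initializes the preconditioner with A internally.
        new BiCgStab().Solve(a, b, x, iterator, new ILU0Preconditioner());

        Console.WriteLine(x);
    }
}
```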
- - - The matrix holding the lower (L) and upper (U) matrices. The - decomposition matrices are combined to reduce storage. - - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - A new matrix containing the lower triagonal elements. - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - This class performs an Incomplete LU factorization with drop tolerance - and partial pivoting. The drop tolerance indicates which additional entries - will be dropped from the factorized LU matrices. - - - The ILUTP-Mem algorithm was taken from:
- ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner -
- Tzu-Yi Chen, Department of Mathematics and Computer Science,
- Pomona College, Claremont CA 91711, USA
- Published in:
- Lecture Notes in Computer Science
- Volume 3046 / 2004
- pp. 20 - 28
- Algorithm is described in Section 2, page 22 -
-
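A sketch of the drop-tolerance/pivoting preconditioner in use. The class name `ILUTPPreconditioner` and the (fill level, drop tolerance, pivot tolerance) constructor order are assumptions based on the settings described above; everything else follows the same solver pattern:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class IlutpDemo
{
    static void Main()
    {
        var a = SparseMatrix.OfArray(new[,]
        {
            { 5.0, 2.0, 0.0, 0.0 },
            { 1.0, 6.0, 2.0, 0.0 },
            { 0.0, 1.0, 7.0, 2.0 },
            { 0.0, 0.0, 1.0, 8.0 }
        });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0, 3.0, 4.0 });
        var x = new DenseVector(4);

        // Fill level 10.0: allow up to ten times the original non-zero count.
        // Drop tolerance 1e-4: discard smaller entries. Pivot tolerance 0.0: no pivoting.
        var preconditioner = new ILUTPPreconditioner(10.0, 1e-4, 0.0);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(500),
            new ResidualStopCriterion<double>(1e-12));

        new BiCgStab().Solve(a, b, x, iterator, preconditioner);
        Console.WriteLine(x);
    }
}
```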
- - - The default fill level. - - - - - The default drop tolerance. - - - - - The decomposed upper triangular matrix. - - - - - The decomposed lower triangular matrix. - - - - - The array containing the pivot values. - - - - - The fill level. - - - - - The drop tolerance. - - - - - The pivot tolerance. - - - - - Initializes a new instance of the class with the default settings. - - - - - Initializes a new instance of the class with the specified settings. - - - The amount of fill that is allowed in the matrix. The value is a fraction of - the number of non-zero entries in the original matrix. Values should be positive. - - - The absolute drop tolerance which indicates below what absolute value an entry - will be dropped from the matrix. A drop tolerance of 0.0 means that no values - will be dropped. Values should always be positive. - - - The pivot tolerance which indicates at what level pivoting will take place. A - value of 0.0 means that no pivoting will take place. - - - - - Gets or sets the amount of fill that is allowed in the matrix. The - value is a fraction of the number of non-zero entries in the original - matrix. The standard value is 200. - - - - Values should always be positive and can be higher than 1.0. A value lower - than 1.0 means that the eventual preconditioner matrix will have fewer - non-zero entries as the original matrix. A value higher than 1.0 means that - the eventual preconditioner can have more non-zero values than the original - matrix. - - - Note that any changes to the FillLevel after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the absolute drop tolerance which indicates below what absolute value - an entry will be dropped from the matrix. The standard value is 0.0001. - - - - The values should always be positive and can be larger than 1.0. A low value will - keep more small numbers in the preconditioner matrix. A high value will remove - more small numbers from the preconditioner matrix. - - - Note that any changes to the DropTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the pivot tolerance which indicates at what level pivoting will - take place. The standard value is 0.0 which means pivoting will never take place. - - - - The pivot tolerance is used to calculate if pivoting is necessary. Pivoting - will take place if any of the values in a row is bigger than the - diagonal value of that row divided by the pivot tolerance, i.e. pivoting - will take place if row(i,j) > row(i,i) / PivotTolerance for - any j that is not equal to i. - - - Note that any changes to the PivotTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the lower triagonal elements. 
- - - - Returns the pivot array. This array is not needed for normal use because - the preconditioner will return the solution vector values in the proper order. - - - This method is used for debugging purposes only and should normally not be used. - - The pivot array. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. Note that the - method takes a general matrix type. However internally the data is stored - as a sparse matrix. Therefore it is not recommended to pass a dense matrix. - - If is . - If is not a square matrix. - - - - Pivot elements in the according to internal pivot array - - Row to pivot in - - - - Was pivoting already performed - - Pivots already done - Current item to pivot - true if performed, otherwise false - - - - Swap columns in the - - Source . - First column index to swap - Second column index to swap - - - - Sort vector descending, not changing vector but placing sorted indices to - - Start sort form - Sort till upper bound - Array with sorted vector indices - Source - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - Pivot elements in according to internal pivot array - - Source . - Result after pivoting. - - - - An element sort algorithm for the class. - - - This sort algorithm is used to sort the columns in a sparse matrix based on - the value of the element on the diagonal of the matrix. - - - - - Sorts the elements of the vector in decreasing - fashion. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Sorts the elements of the vector in decreasing - fashion using heap sort algorithm. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Build heap for double indices - - Root position - Length of - Indices of - Target - - - - Sift double indices - - Indices of - Target - Root position - Length of - - - - Sorts the given integers in a decreasing fashion. - - The values. - - - - Sort the given integers in a decreasing fashion using heapsort algorithm - - Array of values to sort - Length of - - - - Build heap - - Target values array - Root position - Length of - - - - Sift values - - Target value array - Root position - Length of - - - - Exchange values in array - - Target values array - First value to exchange - Second value to exchange - - - - A simple milu(0) preconditioner. - - - Original Fortran code by Yousef Saad (07 January 2004) - - - - Use modified or standard ILU(0) - - - - Gets or sets a value indicating whether to use modified or standard ILU(0). - - - - - Gets a value indicating whether the preconditioner is initialized. - - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square or is not an - instance of SparseCompressedRowMatrixStorage. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector b. - The left hand side vector x. - - - - MILU0 is a simple milu(0) preconditioner. - - Order of the matrix. - Matrix values in CSR format (input). - Column indices (input). 
- Row pointers (input). - Matrix values in MSR format (output). - Row pointers and column indices (output). - Pointer to diagonal elements (output). - True if the modified/MILU algorithm should be used (recommended) - Returns 0 on success or k > 0 if a zero pivot was encountered at step k. - - - - A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' - of the standard BiCgStab solver. - - - The algorithm was taken from:
- ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors -
- Man-Chung Yeung and Tony F. Chan -
- SIAM Journal of Scientific Computing -
- Volume 21, Number 4, pp. 1263 - 1290 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
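A self-contained sketch of the ML(k)-BiCGStab solver, driven through the same Solve signature as the other solvers (Math.NET Numerics 3.x names assumed; the data is arbitrary):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class MlkBiCgStabDemo
{
    static void Main()
    {
        var a = SparseMatrix.OfArray(new[,]
        {
            { 6.0, 1.0, 0.0 },
            { 1.0, 5.0, 2.0 },
            { 0.0, 2.0, 7.0 }
        });
        var b = DenseVector.OfArray(new[] { 3.0, 2.0, 1.0 });
        var x = new DenseVector(3);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        // The solver generates its own Krylov starting vectors by default; the
        // number of starting vectors can be adjusted via the corresponding property.
        var solver = new MlkBiCgStab();
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine(x);
    }
}
```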
- - - The default number of starting vectors. - - - - - The collection of starting vectors which are used as the basis for the Krylov sub-space. - - - - - The number of starting vectors used by the algorithm - - - - - Gets or sets the number of starting vectors. - - - Must be larger than 1 and smaller than the number of variables in the matrix that - for which this solver will be used. - - - - - Resets the number of starting vectors to the default value. - - - - - Gets or sets a series of orthonormal vectors which will be used as basis for the - Krylov sub-space. - - - - - Gets the number of starting vectors to create - - Maximum number - Number of variables - Number of starting vectors to create - - - - Returns an array of starting vectors. - - The maximum number of starting vectors that should be created. - The number of variables. - - An array with starting vectors. The array will never be larger than the - but it may be smaller if - the is smaller than - the . - - - - - Create random vectors array - - Number of vectors - Size of each vector - Array of random vectors - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Source A. - Residual data. - x data. - b data. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. - - - - The TFQMR algorithm was taken from:
- Iterative methods for sparse linear systems. -
- Yousef Saad -
- Algorithm is described in Chapter 7, section 7.4.3, page 219 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
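A minimal sketch of the TFQMR solver in use, under the same Math.NET Numerics 3.x naming assumptions and with arbitrary test data. TFQMR avoids the transposed matrix-vector products of plain QMR, so it fits the same Solve signature as the solvers above:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class TfqmrDemo
{
    static void Main()
    {
        var a = SparseMatrix.OfArray(new[,]
        {
            { 4.0, 1.0, 0.0 },
            { 1.0, 4.0, 1.0 },
            { 0.0, 1.0, 4.0 }
        });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0, 3.0 });
        var x = new DenseVector(3);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        new TFQMR().Solve(a, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine($"x = {x}, residual = {(b - a * x).L2Norm()}");
    }
}
```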
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Is even? - - Number to check - true if even, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. - The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. - Wikipedia - CSR. - - - - - Gets the number of non zero elements in the matrix. - - The number of non zero elements. - - - - Create a new sparse matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new sparse matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable. - The enumerable is assumed to be in row-major order (row by row). - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - - Create a new sparse matrix with the given number of rows and columns as a copy of the given array. - The array is assumed to be in column-major order (column by column). - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix and initialize each value to the same provided value. - - - - - Create a new sparse matrix and initialize each value using the provided init function. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . 
- If the result matrix's dimensions are not the same as this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. 
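A short sketch of the triangle extraction and norm members documented above (method names assumed from the library's Matrix API, not from this diff):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

var m = DenseMatrix.OfArray(new double[,]
{
    { 4, 1, 0 },
    { 2, 5, 1 },
    { 0, 3, 6 },
});

var lower = m.LowerTriangle();                // keeps the diagonal
var strictUpper = m.StrictlyUpperTriangle();  // drops the diagonal

Console.WriteLine(m.InfinityNorm());   // maximum absolute row sum (here 9)
Console.WriteLine(m.FrobeniusNorm());  // square root of the sum of squared entries
```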
- - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Evaluates whether this matrix is symmetric. - - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - A vector with sparse storage, intended for very large vectors where most of the cells are zero. - - The sparse vector is not thread safe. - - - - Gets the number of non zero elements in the vector. - - The number of non zero elements. - - - - Create a new sparse vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new sparse vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new sparse vector as a copy of the given other vector. 
- This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector and initialize each value using the provided value. - - - - - Create a new sparse vector and initialize each value using the provided init function. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled - sparse vector and very inefficient. Would be better to work with a dense vector instead. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. - - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Negates vector and saves result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . 
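The warning above — adding a non-zero scalar turns a sparse vector into a fully populated, inefficient one — can be made concrete with a small sketch (builder and Enumerate() usage assumed from the library's public API):

```csharp
using System;
using System.Linq;
using MathNet.Numerics.LinearAlgebra;

var v = Vector<double>.Build.Sparse(1000);
v[10] = 3.5;
v[500] = -1.0;   // only two stored non-zeros so far

var shifted = v + 0.25;  // a non-zero scalar touches every element

Console.WriteLine(v.Enumerate().Count(e => e != 0.0));        // 2
Console.WriteLine(shifted.Enumerate().Count(e => e != 0.0));  // 1000: effectively dense now
```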
- - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the modulus of. - The divisor to use, - The result of the calculation - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Creates a double sparse vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a double. - - - A double sparse vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - double version of the class. - - - - - Initializes a new instance of the Vector class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Conjugates vector and save result to - - Target vector - - - - Negates vector and saves result to - - Target vector - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. 
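A brief sketch of the norm and index helpers listed above (names assumed from the library's Vector API):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

var v = DenseVector.OfArray(new[] { 3.0, -7.0, 0.5, 2.0 });

Console.WriteLine(v.Sum());                   // -1.5
Console.WriteLine(v.L1Norm());                // 12.5 (Manhattan norm)
Console.WriteLine(v.InfinityNorm());          // 7 (maximum absolute value)
Console.WriteLine(v.Norm(3));                 // general p-norm, here p = 3
Console.WriteLine(v.AbsoluteMaximumIndex());  // 1 (the -7 entry)
Console.WriteLine(v.MinimumIndex());          // 1
```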
- - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Divides each element of the vector by a scalar and stores the result in the result vector. - - - The scalar to divide with. - - - The vector to store the result of the division. - - - - - Divides a scalar by each element of the vector and stores the result in the result vector. - - The scalar to divide. - The vector to store the result of the division. - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise raise this vector to an exponent and store the result into the result vector. - - The exponent to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Returns the value of the absolute minimum element. - - The value of the absolute minimum element. 
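The pointwise operations and dot product described above, sketched under the same assumption about the public API:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

var a = DenseVector.OfArray(new[] { 1.0, 2.0, 3.0 });
var b = DenseVector.OfArray(new[] { 4.0, 5.0, 6.0 });

Console.WriteLine(a.PointwiseMultiply(b));  // (4, 10, 18)
Console.WriteLine(a.PointwiseDivide(b));    // (0.25, 0.4, 0.5)
Console.WriteLine(a.PointwisePower(2.0));   // (1, 4, 9)
Console.WriteLine(a.DotProduct(b));         // 32 = sum of a[i]*b[i]
```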
- - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the value of the absolute maximum element. - - The value of the absolute maximum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - - The p value. - - - Scalar ret = ( ∑|At(i)|^p )^(1/p) - - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Normalizes this vector to a unit vector with respect to the p-norm. - - - The p value. - - - This vector normalized to a unit vector with respect to the p-norm. - - - - - A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). - - - - - Number of rows. - - Using this instead of the RowCount property to speed up calculating - a matrix index in the data array. - - - - Number of columns. - - Using this instead of the ColumnCount property to speed up calculating - a matrix index in the data array. - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new dense matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new dense matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to be in column-major order (column by column) and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - - Create a new dense matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable. - The enumerable is assumed to be in column-major order (column by column). - This new matrix will be independent from the enumerable. 
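The column-major raw-array binding described above means changes to the array are visible through the matrix; a sketch (the (rows, columns, storage) constructor order is an assumption):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

// Column-major: the first two values are column 0, the next two column 1, ...
var storage = new[] { 1.0, 2.0,  3.0, 4.0,  5.0, 6.0 };
var m = new DenseMatrix(2, 3, storage);  // binds directly, no copy

storage[0] = 99.0;           // writes through to the matrix
Console.WriteLine(m[0, 0]);  // 99
Console.WriteLine(m[1, 2]);  // 6 (row 1, column 2)
```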
- A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. 
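The row- and column-oriented factory methods above differ only in how the supplied arrays are laid out; a hedged sketch:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

var byRows = DenseMatrix.OfRowArrays(
    new[] { 1.0, 2.0, 3.0 },
    new[] { 4.0, 5.0, 6.0 });   // 2x3: each array becomes a row

var byColumns = DenseMatrix.OfColumnArrays(
    new[] { 1.0, 2.0, 3.0 },
    new[] { 4.0, 5.0, 6.0 });   // 3x2: each array becomes a column

Console.WriteLine(byColumns.Transpose().Equals(byRows));  // True: same data, transposed layout
```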
- - - - - Create a new dense matrix and initialize each value to the same provided value. - - - - - Create a new dense matrix and initialize each value using the provided init function. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new dense matrix with values sampled from the provided random distribution. - - - - - Gets the matrix's data. - - The matrix's data. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of add - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. 
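TransposeAndMultiply and TransposeThisAndMultiply, documented above, avoid forming the transposed matrix explicitly; a short equivalence check (method names assumed from the public API):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

var a = DenseMatrix.OfArray(new double[,] { { 1, 2 }, { 3, 4 } });
var b = DenseMatrix.OfArray(new double[,] { { 5, 6 }, { 7, 8 } });

var abT = a.TransposeAndMultiply(b);      // A * B^T
var aTb = a.TransposeThisAndMultiply(b);  // A^T * B

Console.WriteLine((abT - a * b.Transpose()).FrobeniusNorm());  // 0
Console.WriteLine((aTb - a.Transpose() * b).FrobeniusNorm());  // 0
```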
- - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Evaluates whether this matrix is symmetric. - - - - - A vector using dense storage. - - - - - Number of elements - - - - - Gets the vector's data. - - - - - Create a new dense vector straight from an initialized vector storage instance. 
- The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new dense vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new dense vector directly binding to a raw array. - The array is used directly without copying. - Very efficient, but changes to the array and the vector will affect each other. - - - - - Create a new dense vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given array. - This new vector will be independent from the array. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector and initialize each value using the provided value. - - - - - Create a new dense vector and initialize each value using the provided init function. - - - - - Create a new dense vector with values sampled from the provided random distribution. - - - - - Gets the vector's data. - - The vector's data. - - - - Returns a reference to the internal data structure. - - The DenseVector whose internal data we are - returning. - - A reference to the internal date of the given vector. - - - - - Returns a vector bound directly to a reference of the provided array. - - The array to bind to the DenseVector object. - - A DenseVector whose values are bound to the given array. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - The scalar to add. - The vector to store the result of the addition. - - - - Adds another vector to this vector and stores the result into the result vector. - - The vector to add to this one. - The vector to store the result of the addition. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The vector to store the result of the subtraction. - - - - Subtracts another vector from this vector and stores the result into the result vector. - - The vector to subtract from this one. - The vector to store the result of the subtraction. - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Negates vector and saves result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - The scalar to multiply. 
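The difference between the direct-binding DenseVector constructor and the copying factory described above, sketched:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

var data = new[] { 1.0, 2.0, 3.0 };

var bound = new DenseVector(data);      // uses the array directly, no copy
var copy  = DenseVector.OfArray(data);  // independent copy

data[1] = 42.0;
Console.WriteLine(bound[1]);  // 42: the change shows through the bound vector
Console.WriteLine(copy[1]);   // 2: the copy is unaffected
```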
- The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . - - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the modulus of. - The divisor to use, - The result of the calculation - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise multiply this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply this one by. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Creates a float dense vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. - - - A float dense vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a real dense vector to float-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. 
- - - A string containing a real vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a real dense vector to float-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - A matrix type for diagonal matrices. - - - Diagonal matrices can be non-square matrices but the diagonal always starts - at element 0,0. A diagonal matrix will throw an exception if non diagonal - entries are set. The exception to this is when the off diagonal elements are - 0.0 or NaN; these settings will cause no change to the diagonal matrix. - - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new diagonal matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to contain the diagonal elements only and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new diagonal matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - The matrix to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - The array to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value using the provided init function. 
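The DiagonalMatrix remarks above say that writing a non-zero value off the diagonal throws, while writing 0.0 (or NaN) is silently ignored. A rough sketch; the builder method and the exact exception type are assumptions, not taken from this diff:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

var d = Matrix<double>.Build.DiagonalOfDiagonalArray(new[] { 1.0, 2.0, 3.0 });

d[1, 1] = 5.0;   // fine: on the diagonal
d[0, 1] = 0.0;   // also fine: a zero off the diagonal causes no change

try
{
    d[0, 1] = 7.0;  // a non-zero off-diagonal write is rejected
}
catch (Exception e)
{
    Console.WriteLine(e.GetType().Name);
}
```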
- - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new diagonal matrix with diagonal values sampled from the provided random distribution. - - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - If the result matrix's dimensions are not the same as this matrix. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the division. - - - - Computes the determinant of this matrix. - - The determinant of this matrix. - - - - Returns the elements of the diagonal in a . - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Copies the values of the given array to the diagonal. - - The array to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - - Copies the values of the given to the diagonal. - - The vector to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - Calculates the condition number of this matrix. 
- The condition number of the matrix. - - - Computes the inverse of this matrix. - If is not a square matrix. - If is singular. - The inverse of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Permute the columns of a matrix according to a permutation. - - The column permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Permute the rows of a matrix according to a permutation. - - The row permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Evaluates whether this matrix is symmetric. - - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - A class which encapsulates the functionality of a Cholesky factorization. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. 
- - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Gets the determinant of the matrix for which the Cholesky matrix was computed. - - - - - Gets the log determinant of the matrix for which the Cholesky matrix was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for dense matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. 
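A sketch of the Cholesky usage described above: factor a symmetric positive definite A once, then reuse the cached factorization to solve A x = b (names assumed from the public API):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

var a = DenseMatrix.OfArray(new double[,] { { 4, 2 }, { 2, 3 } });  // symmetric, positive definite

var chol = a.Cholesky();
var l = chol.Factor;                  // lower-triangular L with A = L*L'
Console.WriteLine(chol.Determinant);  // 8

var b = DenseVector.OfArray(new[] { 1.0, 2.0 });
var x = chol.Solve(b);                // solves A x = b using the cached factor

Console.WriteLine((a * x - b).L2Norm());  // ~0
```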
On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. 
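The LU and QR classes above solve the same kind of systems through different factorizations (P*A = L*U with pivoting, A = Q*R via Householder reflections); a hedged sketch comparing them:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

var a = DenseMatrix.OfArray(new double[,] { { 3, 1 }, { 1, 2 } });
var b = DenseVector.OfArray(new[] { 9.0, 8.0 });

var lu = a.LU();   // triangular factors plus pivoting
var qr = a.QR();   // orthogonal Q and upper-triangular R

var x1 = lu.Solve(b);
var x2 = qr.Solve(b);

Console.WriteLine((x1 - x2).L2Norm());  // both solvers agree up to rounding
Console.WriteLine(lu.Determinant);      // 5
```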
- The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. 
- Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. 
- If EVD algorithm failed to converge with matrix . - - - - Symmetric Householder reduction to tridiagonal form. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tred2 by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Complex scalar division X/Y. - - Real part of X - Imaginary part of X - Real part of Y - Imaginary part of Y - Division result as a number. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. 
- If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Double value z1 - Double value z2 - Result multiplication of signum function and absolute value - - - - Swap column and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. 
On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Calculate Norm 2 of the column in matrix starting from row - - Source matrix - The number of rows in - Column index - Start row index - Norm2 (Euclidean norm) of the column - - - - Calculate Norm 2 of the vector starting from index - - Source vector - Start index - Norm2 (Euclidean norm) of the vector - - - - Calculate dot product of and - - Source matrix - The number of rows in - Index of column A - Index of column B - Starting row index - Dot product value - - - - Performs rotation of points in the plane. Given two vectors x and y , - each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) - - Source matrix - The number of rows in - Index of column A - Index of column B - Scalar "c" value - Scalar "s" value - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - float version of the class. - - - - - Initializes a new instance of the Matrix class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Returns the conjugate transpose of this matrix. - - The conjugate transpose of this matrix. - - - - Puts the conjugate transpose of this matrix into the result matrix. - - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. 
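The factorization classes documented above (Cholesky, LU, QR, Svd, Evd) all follow the same pattern: the decomposition is computed once in the constructor, cached, and then reused for Solve calls. A minimal sketch, assuming the MathNet.Numerics 3.x API (the `Matrix<float>.Build` factory and the `LU()`, `QR()`, `Cholesky()`, `Svd()` instance methods come from that library, not from this file):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationDemo
{
    static void Main()
    {
        // A small symmetric positive definite system A*x = b.
        var a = Matrix<float>.Build.DenseOfArray(new float[,]
        {
            { 4f, 1f, 0f },
            { 1f, 3f, 1f },
            { 0f, 1f, 2f }
        });
        var b = Vector<float>.Build.Dense(new[] { 1f, 2f, 3f });

        var x1 = a.LU().Solve(b);        // general square systems
        var x2 = a.QR().Solve(b);        // also least squares when rows >= columns
        var x3 = a.Cholesky().Solve(b);  // symmetric positive definite matrices only

        var svd = a.Svd();               // most robust; also exposes rank and condition number
        var x4 = svd.Solve(b);

        Console.WriteLine(x1);
        Console.WriteLine($"rank = {svd.Rank}, condition number = {svd.ConditionNumber}");
    }
}
```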
- - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to divide by each element of the matrix. - The matrix to store the result of the division. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. 
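The element-wise and transpose-product operations listed above can be exercised directly on dense matrices. A minimal sketch, again assuming the MathNet.Numerics 3.x `Matrix<float>` API:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class PointwiseDemo
{
    static void Main()
    {
        var a = Matrix<float>.Build.DenseOfArray(new float[,] { { 1f, 2f }, { 3f, 4f } });
        var c = Matrix<float>.Build.DenseOfArray(new float[,] { { 5f, 6f }, { 7f, 8f } });

        var hadamard = a.PointwiseMultiply(c);        // element-wise product
        var ratio    = a.PointwiseDivide(c);          // element-wise quotient
        var squared  = a.PointwisePower(2f);          // element-wise power
        var ata      = a.TransposeThisAndMultiply(a); // A^T * A without forming A^T explicitly
        var act      = a.TransposeAndMultiply(c);     // A * C^T

        Console.WriteLine(hadamard);
        Console.WriteLine(ata);
    }
}
```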
- - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Calculates the p-norms of all row vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the p-norms of all column vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all row vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all column vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the value sum of each row vector. - - - - - Calculates the absolute value sum of each row vector. - - - - - Calculates the value sum of each column vector. - - - - - Calculates the absolute value sum of each column vector. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' - of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the - BiCGStab can be used on non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The Bi-CGSTAB algorithm was taken from:
- Templates for the solution of linear systems: Building blocks - for iterative methods -
- Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, - June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, - Charles Romine and Henk van der Vorst -
- Url: http://www.netlib.org/templates/Templates.html -
- Algorithm is described in Chapter 2, section 2.3.8, page 27 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
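A minimal sketch of driving the BiCGStab solver, assuming the MathNet.Numerics 3.x solver namespaces (the `BiCgStab`, `Iterator<T>`, stop-criterion and `UnitPreconditioner<T>` class names are taken from that library and are assumptions, not part of this file); the `Solve(matrix, b, x, iterator, preconditioner)` call matches the signature documented in this section:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Single.Solvers;

class BiCgStabDemo
{
    static void Main()
    {
        var a = Matrix<float>.Build.DenseOfArray(new float[,]
        {
            { 4f, 1f, 0f },
            { 2f, 5f, 1f },
            { 0f, 1f, 3f }
        });
        var b = Vector<float>.Build.Dense(new[] { 1f, 2f, 3f });
        var x = Vector<float>.Build.Dense(b.Count);   // result vector, filled in place

        // The iterator decides when to stop: after 1000 iterations or once the
        // residual drops below 1e-6, whichever comes first.
        var iterator = new Iterator<float>(
            new IterationCountStopCriterion<float>(1000),
            new ResidualStopCriterion<float>(1e-6));

        new BiCgStab().Solve(a, b, x, iterator, new UnitPreconditioner<float>());
        Console.WriteLine(x);
    }
}
```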
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient , A. - The solution , b. - The result , x. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A composite matrix solver. The actual solver is made by a sequence of - matrix solvers. - - - - Solver based on:
- Faster PDE-based simulations using robust composite linear solvers
- S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
- Future Generation Computer Systems, Vol 20, 2004, pp 373–387
-
- - Note that if an iterator is passed to this solver it will be used for all the sub-solvers. - -
-
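The "true residual" check described above (residual = b - Ax) can be reproduced with the public operators alone, which is a useful sanity check after any solve. A minimal sketch:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class ResidualCheck
{
    static void Main()
    {
        var a = Matrix<float>.Build.DenseOfArray(new float[,] { { 4f, 1f }, { 1f, 3f } });
        var b = Vector<float>.Build.Dense(new[] { 1f, 2f });
        var x = a.LU().Solve(b);

        var residual = b - a * x;              // true residual, independent of the solver's internal estimate
        Console.WriteLine(residual.L2Norm());  // should be near machine precision here
    }
}
```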
- - - The collection of solvers that will be used - - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A diagonal preconditioner. The preconditioner uses the inverse - of the matrix diagonal as preconditioning values. - - - - - The inverse of the matrix diagonal. - - - - - Returns the decomposed matrix diagonal. - - The matrix diagonal. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - A Generalized Product Bi-Conjugate Gradient iterative matrix solver. - - - - The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an - alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. - Unlike the CG solver the GPBiCG solver can be used on - non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The GPBiCG algorithm was taken from:
- GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with - efficiency and robustness -
- S. Fujino -
- Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 -
-
- - The example code below provides an indication of the possible use of the - solver. - -
-
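A minimal sketch combining the GPBiCG solver with the diagonal preconditioner described above (the `GpBiCg` and `DiagonalPreconditioner` class names are assumptions taken from the MathNet.Numerics single-precision solver namespace):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Single.Solvers;

class GpBiCgDemo
{
    static void Main()
    {
        var a = Matrix<float>.Build.DenseOfArray(new float[,]
        {
            { 5f, 1f, 0f },
            { 1f, 4f, 2f },
            { 0f, 2f, 6f }
        });
        var b = Vector<float>.Build.Dense(new[] { 1f, 0f, 1f });
        var x = Vector<float>.Build.Dense(b.Count);

        var iterator = new Iterator<float>(
            new IterationCountStopCriterion<float>(1000),
            new ResidualStopCriterion<float>(1e-6));

        // Approximates A^-1 by the inverse of the diagonal of A.
        var preconditioner = new DiagonalPreconditioner();

        new GpBiCg().Solve(a, b, x, iterator, preconditioner);
        Console.WriteLine(x);
    }
}
```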
- - - Indicates the number of BiCGStab steps should be taken - before switching. - - - - - Indicates the number of GPBiCG steps should be taken - before switching. - - - - - Gets or sets the number of steps taken with the BiCgStab algorithm - before switching over to the GPBiCG algorithm. - - - - - Gets or sets the number of steps taken with the GPBiCG algorithm - before switching over to the BiCgStab algorithm. - - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Decide if to do steps with BiCgStab - - Number of iteration - true if yes, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - An incomplete, level 0, LU factorization preconditioner. - - - The ILU(0) algorithm was taken from:
- Iterative methods for sparse linear systems
- Yousef Saad
- Algorithm is described in Chapter 10, section 10.3.2, page 275
-
-
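A minimal sketch of pairing the ILU(0) preconditioner described above with an iterative solver (the `ILU0Preconditioner` class name and the `SparseOfIndexed` builder are assumptions taken from the MathNet.Numerics single-precision namespaces):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Single.Solvers;

class Ilu0Demo
{
    static void Main()
    {
        // A small sparse, diagonally dominant test matrix given as (row, column, value) triples.
        var a = Matrix<float>.Build.SparseOfIndexed(3, 3, new[]
        {
            Tuple.Create(0, 0, 4f), Tuple.Create(0, 1, 1f),
            Tuple.Create(1, 0, 1f), Tuple.Create(1, 1, 5f), Tuple.Create(1, 2, 2f),
            Tuple.Create(2, 1, 2f), Tuple.Create(2, 2, 6f)
        });
        var b = Vector<float>.Build.Dense(new[] { 1f, 2f, 3f });
        var x = Vector<float>.Build.Dense(b.Count);

        var preconditioner = new ILU0Preconditioner();
        preconditioner.Initialize(a);   // factorizes A into the combined L/U storage

        var iterator = new Iterator<float>(
            new IterationCountStopCriterion<float>(1000),
            new ResidualStopCriterion<float>(1e-6));

        new BiCgStab().Solve(a, b, x, iterator, preconditioner);
        Console.WriteLine(x);
    }
}
```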
- - - The matrix holding the lower (L) and upper (U) matrices. The - decomposition matrices are combined to reduce storage. - - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - A new matrix containing the lower triagonal elements. - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - This class performs an Incomplete LU factorization with drop tolerance - and partial pivoting. The drop tolerance indicates which additional entries - will be dropped from the factorized LU matrices. - - - The ILUTP-Mem algorithm was taken from:
- ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner -
- Tzu-Yi Chen, Department of Mathematics and Computer Science,
- Pomona College, Claremont CA 91711, USA
- Published in:
- Lecture Notes in Computer Science
- Volume 3046 / 2004
- pp. 20 - 28
- Algorithm is described in Section 2, page 22 -
-
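A minimal sketch of constructing the ILUTP preconditioner described above with an explicit fill level, drop tolerance and pivot tolerance, in the order the constructor documentation above lists them (the `ILUTPPreconditioner` class name is an assumption taken from the MathNet.Numerics single-precision solver namespace):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Single.Solvers;

class IlutpDemo
{
    static void Main()
    {
        var a = Matrix<float>.Build.SparseOfIndexed(3, 3, new[]
        {
            Tuple.Create(0, 0, 10f), Tuple.Create(0, 2, 1f),
            Tuple.Create(1, 1, 8f),  Tuple.Create(1, 2, 2f),
            Tuple.Create(2, 0, 1f),  Tuple.Create(2, 2, 9f)
        });
        var b = Vector<float>.Build.Dense(new[] { 1f, 1f, 1f });
        var x = Vector<float>.Build.Dense(b.Count);

        // Allow fill up to 10x the original non-zero count, drop entries whose
        // absolute value is below 1e-4, and enable pivoting with a tolerance of 0.5.
        var preconditioner = new ILUTPPreconditioner(10.0, 1e-4, 0.5);
        preconditioner.Initialize(a);

        var iterator = new Iterator<float>(
            new IterationCountStopCriterion<float>(1000),
            new ResidualStopCriterion<float>(1e-6));

        new BiCgStab().Solve(a, b, x, iterator, preconditioner);
        Console.WriteLine(x);
    }
}
```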
- - - The default fill level. - - - - - The default drop tolerance. - - - - - The decomposed upper triangular matrix. - - - - - The decomposed lower triangular matrix. - - - - - The array containing the pivot values. - - - - - The fill level. - - - - - The drop tolerance. - - - - - The pivot tolerance. - - - - - Initializes a new instance of the class with the default settings. - - - - - Initializes a new instance of the class with the specified settings. - - - The amount of fill that is allowed in the matrix. The value is a fraction of - the number of non-zero entries in the original matrix. Values should be positive. - - - The absolute drop tolerance which indicates below what absolute value an entry - will be dropped from the matrix. A drop tolerance of 0.0 means that no values - will be dropped. Values should always be positive. - - - The pivot tolerance which indicates at what level pivoting will take place. A - value of 0.0 means that no pivoting will take place. - - - - - Gets or sets the amount of fill that is allowed in the matrix. The - value is a fraction of the number of non-zero entries in the original - matrix. The standard value is 200. - - - - Values should always be positive and can be higher than 1.0. A value lower - than 1.0 means that the eventual preconditioner matrix will have fewer - non-zero entries as the original matrix. A value higher than 1.0 means that - the eventual preconditioner can have more non-zero values than the original - matrix. - - - Note that any changes to the FillLevel after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the absolute drop tolerance which indicates below what absolute value - an entry will be dropped from the matrix. The standard value is 0.0001. - - - - The values should always be positive and can be larger than 1.0. A low value will - keep more small numbers in the preconditioner matrix. A high value will remove - more small numbers from the preconditioner matrix. - - - Note that any changes to the DropTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the pivot tolerance which indicates at what level pivoting will - take place. The standard value is 0.0 which means pivoting will never take place. - - - - The pivot tolerance is used to calculate if pivoting is necessary. Pivoting - will take place if any of the values in a row is bigger than the - diagonal value of that row divided by the pivot tolerance, i.e. pivoting - will take place if row(i,j) > row(i,i) / PivotTolerance for - any j that is not equal to i. - - - Note that any changes to the PivotTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the lower triagonal elements. 
- - - - Returns the pivot array. This array is not needed for normal use because - the preconditioner will return the solution vector values in the proper order. - - - This method is used for debugging purposes only and should normally not be used. - - The pivot array. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. Note that the - method takes a general matrix type. However internally the data is stored - as a sparse matrix. Therefore it is not recommended to pass a dense matrix. - - If is . - If is not a square matrix. - - - - Pivot elements in the according to internal pivot array - - Row to pivot in - - - - Was pivoting already performed - - Pivots already done - Current item to pivot - true if performed, otherwise false - - - - Swap columns in the - - Source . - First column index to swap - Second column index to swap - - - - Sort vector descending, not changing vector but placing sorted indices to - - Start sort form - Sort till upper bound - Array with sorted vector indices - Source - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - Pivot elements in according to internal pivot array - - Source . - Result after pivoting. - - - - An element sort algorithm for the class. - - - This sort algorithm is used to sort the columns in a sparse matrix based on - the value of the element on the diagonal of the matrix. - - - - - Sorts the elements of the vector in decreasing - fashion. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Sorts the elements of the vector in decreasing - fashion using heap sort algorithm. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Build heap for double indices - - Root position - Length of - Indices of - Target - - - - Sift double indices - - Indices of - Target - Root position - Length of - - - - Sorts the given integers in a decreasing fashion. - - The values. - - - - Sort the given integers in a decreasing fashion using heapsort algorithm - - Array of values to sort - Length of - - - - Build heap - - Target values array - Root position - Length of - - - - Sift values - - Target value array - Root position - Length of - - - - Exchange values in array - - Target values array - First value to exchange - Second value to exchange - - - - A simple milu(0) preconditioner. - - - Original Fortran code by Yousef Saad (07 January 2004) - - - - Use modified or standard ILU(0) - - - - Gets or sets a value indicating whether to use modified or standard ILU(0). - - - - - Gets a value indicating whether the preconditioner is initialized. - - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square or is not an - instance of SparseCompressedRowMatrixStorage. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector b. - The left hand side vector x. - - - - MILU0 is a simple milu(0) preconditioner. - - Order of the matrix. - Matrix values in CSR format (input). - Column indices (input). 
- Row pointers (input). - Matrix values in MSR format (output). - Row pointers and column indices (output). - Pointer to diagonal elements (output). - True if the modified/MILU algorithm should be used (recommended) - Returns 0 on success or k > 0 if a zero pivot was encountered at step k. - - - - A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' - of the standard BiCgStab solver. - - - The algorithm was taken from:
- ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors -
- Man-Chung Yeung and Tony F. Chan -
- SIAM Journal of Scientific Computing -
- Volume 21, Number 4, pp. 1263 - 1290 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
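A minimal sketch of the ML(k)-BiCGStab solver described above; it plugs into the same `Solve(matrix, b, x, iterator, preconditioner)` signature as the other solvers and keeps the default number of starting vectors (the `MlkBiCgStab` class name is an assumption taken from the MathNet.Numerics single-precision solver namespace):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Single.Solvers;

class MlkBiCgStabDemo
{
    static void Main()
    {
        // A 20x20 diagonally dominant tridiagonal system.
        var a = Matrix<float>.Build.Dense(20, 20,
            (i, j) => i == j ? 4f : Math.Abs(i - j) == 1 ? 1f : 0f);
        var b = Vector<float>.Build.Dense(20, 1f);
        var x = Vector<float>.Build.Dense(20);

        var iterator = new Iterator<float>(
            new IterationCountStopCriterion<float>(1000),
            new ResidualStopCriterion<float>(1e-6));

        new MlkBiCgStab().Solve(a, b, x, iterator, new UnitPreconditioner<float>());
        Console.WriteLine(x);
    }
}
```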
- - - The default number of starting vectors. - - - - - The collection of starting vectors which are used as the basis for the Krylov sub-space. - - - - - The number of starting vectors used by the algorithm - - - - - Gets or sets the number of starting vectors. - - - Must be larger than 1 and smaller than the number of variables in the matrix that - for which this solver will be used. - - - - - Resets the number of starting vectors to the default value. - - - - - Gets or sets a series of orthonormal vectors which will be used as basis for the - Krylov sub-space. - - - - - Gets the number of starting vectors to create - - Maximum number - Number of variables - Number of starting vectors to create - - - - Returns an array of starting vectors. - - The maximum number of starting vectors that should be created. - The number of variables. - - An array with starting vectors. The array will never be larger than the - but it may be smaller if - the is smaller than - the . - - - - - Create random vectors array - - Number of vectors - Size of each vector - Array of random vectors - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Source A. - Residual data. - x data. - b data. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. - - - - The TFQMR algorithm was taken from:
- Iterative methods for sparse linear systems. -
- Yousef Saad -
- Algorithm is described in Chapter 7, section 7.4.3, page 219 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
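A minimal sketch of the TFQMR solver described above; apart from the solver class it is identical to the other iterative-solver examples in this section (the `TFQMR` class name is an assumption taken from the MathNet.Numerics single-precision solver namespace):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Single.Solvers;

class TfqmrDemo
{
    static void Main()
    {
        var a = Matrix<float>.Build.Dense(16, 16,
            (i, j) => i == j ? 5f : Math.Abs(i - j) == 1 ? -1f : 0f);
        var b = Vector<float>.Build.Dense(16, 1f);
        var x = Vector<float>.Build.Dense(16);

        var iterator = new Iterator<float>(
            new IterationCountStopCriterion<float>(1000),
            new ResidualStopCriterion<float>(1e-6));

        new TFQMR().Solve(a, b, x, iterator, new UnitPreconditioner<float>());
        Console.WriteLine(x);
    }
}
```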
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Is even? - - Number to check - true if even, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. - The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. - Wikipedia - CSR. - - - - - Gets the number of non zero elements in the matrix. - - The number of non zero elements. - - - - Create a new sparse matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new sparse matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable. - The enumerable is assumed to be in row-major order (row by row). - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - - Create a new sparse matrix with the given number of rows and columns as a copy of the given array. - The array is assumed to be in column-major order (column by column). - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix and initialize each value to the same provided value. - - - - - Create a new sparse matrix and initialize each value using the provided init function. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . 
- If the result matrix's dimensions are not the same as this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. 
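A minimal sketch of building the CSR-backed sparse matrix described above from indexed (row, column, value) triples (the `SparseMatrix.OfIndexed` factory and the `NonZerosCount` property names are assumptions based on the members documented in this section):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Single;

class SparseMatrixDemo
{
    static void Main()
    {
        // Only the listed entries are stored; every other cell is an implicit zero.
        var s = SparseMatrix.OfIndexed(1000, 1000, new[]
        {
            Tuple.Create(0, 0, 2f),
            Tuple.Create(1, 1, 2f),
            Tuple.Create(999, 999, 2f),
            Tuple.Create(0, 999, -1f)
        });

        Console.WriteLine(s.NonZerosCount);    // 4
        Console.WriteLine(s.FrobeniusNorm());  // computed from the stored entries only

        var v = Vector<float>.Build.Dense(1000, 1f);
        Console.WriteLine((s * v).Sum());      // sparse matrix-vector product
    }
}
```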
- - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Evaluates whether this matrix is symmetric. - - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - A vector with sparse storage, intended for very large vectors where most of the cells are zero. - - The sparse vector is not thread safe. - - - - Gets the number of non zero elements in the vector. - - The number of non zero elements. - - - - Create a new sparse vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new sparse vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new sparse vector as a copy of the given other vector. 
- This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector and initialize each value using the provided value. - - - - - Create a new sparse vector and initialize each value using the provided init function. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled - sparse vector and very inefficient. Would be better to work with a dense vector instead. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. - - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Negates vector and saves result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . 
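A minimal sketch of working with the sparse vector described above; note the documented warning that adding a non-zero scalar produces a fully populated (and therefore inefficient) sparse vector, so the example sticks to setting individual entries (MathNet.Numerics 3.x builder API assumed):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class SparseVectorDemo
{
    static void Main()
    {
        var v = Vector<float>.Build.Sparse(1000000);  // all zero, nothing stored yet
        v[3] = 1.5f;
        v[200000] = -2.0f;

        var w = Vector<float>.Build.Sparse(1000000);
        w[3] = 4.0f;

        Console.WriteLine(v.DotProduct(w));  // 6: only stored entries contribute
        Console.WriteLine(v.L1Norm());       // 3.5
    }
}
```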
- - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the modulus of. - The divisor to use, - The result of the calculation - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Creates a float sparse vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. - - - A float sparse vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - float version of the class. - - - - - Initializes a new instance of the Vector class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Conjugates vector and save result to - - Target vector - - - - Negates vector and saves result to - - Target vector - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. 
- - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Divides each element of the vector by a scalar and stores the result in the result vector. - - - The scalar to divide with. - - - The vector to store the result of the division. - - - - - Divides a scalar by each element of the vector and stores the result in the result vector. - - The scalar to divide. - The vector to store the result of the division. - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise raise this vector to an exponent and store the result into the result vector. - - The exponent to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Returns the value of the absolute minimum element. - - The value of the absolute minimum element. 
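A minimal sketch of the norm, dot-product and pointwise operations of the float vector class documented in this section (MathNet.Numerics 3.x API assumed):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class VectorOpsDemo
{
    static void Main()
    {
        var v = Vector<float>.Build.Dense(new[] { 3f, -4f, 0f });
        var w = Vector<float>.Build.Dense(new[] { 1f, 2f, 3f });

        Console.WriteLine(v.L1Norm());             // 7  (Manhattan norm)
        Console.WriteLine(v.L2Norm());             // 5  (Euclidean norm)
        Console.WriteLine(v.InfinityNorm());       // 4  (maximum absolute value)
        Console.WriteLine(v.DotProduct(w));        // 3*1 + (-4)*2 + 0*3 = -5
        Console.WriteLine(v.PointwiseMultiply(w)); // values 3, -8, 0
        Console.WriteLine(v.Normalize(2.0));       // v scaled to unit Euclidean length
    }
}
```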
- - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the value of the absolute maximum element. - - The value of the absolute maximum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - - The p value. - - - Scalar ret = ( ∑|At(i)|^p )^(1/p) - - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Normalizes this vector to a unit vector with respect to the p-norm. - - - The p value. - - - This vector normalized to a unit vector with respect to the p-norm. - - - - - A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). - - - - - Number of rows. - - Using this instead of the RowCount property to speed up calculating - a matrix index in the data array. - - - - Number of columns. - - Using this instead of the ColumnCount property to speed up calculating - a matrix index in the data array. - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new dense matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new dense matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to be in column-major order (column by column) and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - - Create a new dense matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable. - The enumerable is assumed to be in column-major order (column by column). - This new matrix will be independent from the enumerable. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. 
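The dense-matrix constructors above come in two flavours: copying constructors that allocate a new memory block, and raw-array constructors that bind directly to caller-owned, column-major storage. A minimal sketch of both, assuming the Math.NET Numerics 3.x `Matrix<double>.Build` factory:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class DenseMatrixDemo
{
    static void Main()
    {
        // Copy of a 2-D array; the matrix gets its own independent storage
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 1.0, 2.0 },
            { 3.0, 4.0 }
        });

        // 3x3 dense matrix with all cells initialized to zero
        var z = Matrix<double>.Build.Dense(3, 3);

        // Direct binding to a column-major array: this 2x2 matrix is
        // | 1 3 |
        // | 2 4 |
        // Changes to the array and the matrix affect each other.
        var colMajor = Matrix<double>.Build.Dense(2, 2, new[] { 1.0, 2.0, 3.0, 4.0 });

        Console.WriteLine(a);
        Console.WriteLine(z);
        Console.WriteLine(colMajor);
    }
}
```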
- - - - - Create a new dense matrix and initialize each value to the same provided value. - - - - - Create a new dense matrix and initialize each value using the provided init function. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new dense matrix with values sampled from the provided random distribution. - - - - - Gets the matrix's data. - - The matrix's data. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of add - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. 
- The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Evaluates whether this matrix is symmetric. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A vector using dense storage. - - - - - Number of elements - - - - - Gets the vector's data. - - - - - Create a new dense vector straight from an initialized vector storage instance. - The storage is used directly without copying. 
- Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new dense vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new dense vector directly binding to a raw array. - The array is used directly without copying. - Very efficient, but changes to the array and the vector will affect each other. - - - - - Create a new dense vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given array. - This new vector will be independent from the array. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector and initialize each value using the provided value. - - - - - Create a new dense vector and initialize each value using the provided init function. - - - - - Create a new dense vector with values sampled from the provided random distribution. - - - - - Gets the vector's data. - - The vector's data. - - - - Returns a reference to the internal data structure. - - The DenseVector whose internal data we are - returning. - - A reference to the internal date of the given vector. - - - - - Returns a vector bound directly to a reference of the provided array. - - The array to bind to the DenseVector object. - - A DenseVector whose values are bound to the given array. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - The scalar to add. - The vector to store the result of the addition. - - - - Adds another vector to this vector and stores the result into the result vector. - - The vector to add to this one. - The vector to store the result of the addition. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The vector to store the result of the subtraction. - - - - Subtracts another vector from this vector and stores the result into the result vector. - - The vector to subtract from this one. - The vector to store the result of the subtraction. - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Negates vector and saves result to - - Target vector - - - - Conjugates vector and save result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. 
- - The scalar to multiply. - The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Multiplies a vector with a complex. - - The vector to scale. - The Complex value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a complex. - - The Complex value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . - - - - Divides a vector with a complex. - - The vector to divide. - The Complex value. - The result of the division. - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Creates a Complex dense vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. - - - A Complex dense vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a complex dense vector to double-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a complex dense vector to double-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. 
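The dense-vector entries above (creation from arrays, the arithmetic operators that allocate a new result, and the dot products) can be exercised with a short sketch, again assuming the Math.NET Numerics 3.x builder API:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class DenseVectorDemo
{
    static void Main()
    {
        // Copies of the arrays: each vector owns its own storage
        var a = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
        var b = Vector<double>.Build.DenseOfArray(new[] { 4.0, 5.0, 6.0 });

        // Dot product: sum of a[i] * b[i] = 4 + 10 + 18 = 32
        Console.WriteLine(a.DotProduct(b));

        // The operators allocate and return a new result vector
        Console.WriteLine(a + b);    // (5, 7, 9)
        Console.WriteLine(2.0 * a);  // (2, 4, 6)
    }
}
```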
- - - - - A matrix type for diagonal matrices. - - - Diagonal matrices can be non-square matrices but the diagonal always starts - at element 0,0. A diagonal matrix will throw an exception if non diagonal - entries are set. The exception to this is when the off diagonal elements are - 0.0 or NaN; these settings will cause no change to the diagonal matrix. - - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new diagonal matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to contain the diagonal elements only and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new diagonal matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - The matrix to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - The array to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new diagonal matrix with diagonal values sampled from the provided random distribution. - - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. 
- - The matrix to subtract. - The matrix to store the result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - If the result matrix's dimensions are not the same as this matrix. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the division. - - - - Computes the determinant of this matrix. - - The determinant of this matrix. - - - - Returns the elements of the diagonal in a . - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Copies the values of the given array to the diagonal. - - The array to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - - Copies the values of the given to the diagonal. - - The vector to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the Frobenius norm of this matrix. - The Frobenius norm of this matrix. - - - Calculates the condition number of this matrix. 
- The condition number of the matrix. - - - Computes the inverse of this matrix. - If is not a square matrix. - If is singular. - The inverse of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Permute the columns of a matrix according to a permutation. - - The column permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Permute the rows of a matrix according to a permutation. - - The row permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Evaluates whether this matrix is symmetric. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A class which encapsulates the functionality of a Cholesky factorization. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Gets the determinant of the matrix for which the Cholesky matrix was computed. - - - - - Gets the log determinant of the matrix for which the Cholesky matrix was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for dense matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. 
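The Cholesky class described above factors a symmetric, positive definite A as A = L*L' at construction time and caches the result, so one factorization can be reused for the determinant and for several solves. A minimal sketch, assuming the instance method `Cholesky()` exposed on `Matrix<double>` in Math.NET Numerics 3.x:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class CholeskyDemo
{
    static void Main()
    {
        // A symmetric, positive definite matrix
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4.0, 2.0 },
            { 2.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

        // The factorization A = L * L' is computed when Cholesky() is called
        var cholesky = a.Cholesky();
        Console.WriteLine(cholesky.Factor);       // lower triangular factor L
        Console.WriteLine(cholesky.Determinant);  // det(A) = 4*3 - 2*2 = 8

        // Solve A x = b using the cached factorization
        var x = cholesky.Solve(b);
        Console.WriteLine(x);
        Console.WriteLine(a * x);                 // should reproduce b
    }
}
```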
- - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. 
This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The type of QR factorization to perform. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. 
A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . 
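The SVD entries above define M = UΣVT with the singular values ordered descending on the diagonal of Σ, and expose the rank, 2-norm and condition number derived from them. A possible sketch, assuming the `Svd()` instance method of Math.NET Numerics 3.x and its `S`, `U`, `VT`, `W`, `Rank` and `ConditionNumber` members (the `W` property, Σ as a diagonal matrix, is an assumption about the exact member name):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class SvdDemo
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 3.0, 1.0 },
            { 1.0, 3.0 },
            { 0.0, 0.0 }
        });

        // Compute M = U * Sigma * V^T, requesting the singular vectors
        var svd = m.Svd(true);

        Console.WriteLine(svd.S);                // singular values, descending: 4, 2
        Console.WriteLine(svd.Rank);             // 2: both singular values are non-negligible
        Console.WriteLine(svd.ConditionNumber);  // max(S) / min(S) = 2

        // Reassemble the factorization; W holds Sigma as an m-by-n diagonal matrix
        Console.WriteLine(svd.U * svd.W * svd.VT);
    }
}
```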
- - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. - - Source matrix to reduce - Output: Arrays for internal storage of real parts of eigenvalues - Output: Arrays for internal storage of imaginary parts of eigenvalues - Output: Arrays that contains further information about the transformations. - Order of initial matrix - This is derived from the Algol procedures HTRIDI by - Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. 
- Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Determines eigenvectors by undoing the symmetric tridiagonalize transformation - - The eigen vectors to work on. - Previously tridiagonalized matrix by . - Contains further information about the transformations - Input matrix order - This is derived from the Algol procedures HTRIBK, by - by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - The eigen values to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. 
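The LU entries above factor A into a lower and an upper triangular pair (with pivoting for stability) once, at construction time, and then reuse that factorization for solves, the determinant and the inverse. A minimal sketch, assuming the `LU()` instance method of Math.NET Numerics 3.x:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class LuDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 2.0, 1.0, 1.0 },
            { 4.0, 3.0, 3.0 },
            { 8.0, 7.0, 9.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 4.0, 10.0, 24.0 });

        // Factorization with partial pivoting, computed once and cached
        var lu = a.LU();
        Console.WriteLine(lu.L);            // unit lower triangular factor
        Console.WriteLine(lu.U);            // upper triangular factor
        Console.WriteLine(lu.Determinant);

        // Reuse the factorization for one or more right-hand sides
        var x = lu.Solve(b);
        Console.WriteLine(a * x);           // should reproduce b
    }
}
```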
- - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Complex value z1 - Complex value z2 - Result multiplication of signum function and absolute value - - - - Interchanges two vectors and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. 
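The Givens-rotation entry above is the analogue of LAPACK's DROTG: for a point (da, db) it produces c and s such that the rotation zeros the y-coordinate. A self-contained sketch of the basic construction (ignoring DROTG's exact sign and z-parameter conventions, which differ slightly from this plain version):

```csharp
using System;

static class Givens
{
    // For a point (a, b), find c and s with c*a + s*b = r and -s*a + c*b = 0,
    // i.e. the rotation [c s; -s c] zeroes the second component.
    public static (double c, double s, double r) Rotation(double a, double b)
    {
        if (b == 0.0)
        {
            return (1.0, 0.0, a);
        }
        double r = Math.Sqrt(a * a + b * b);
        return (a / r, b / r, r);
    }

    static void Main()
    {
        var (c, s, r) = Rotation(3.0, 4.0);    // c = 0.6, s = 0.8, r = 5
        Console.WriteLine($"{c} {s} {r}");
        Console.WriteLine(-s * 3.0 + c * 4.0); // ~0: the y-coordinate is zeroed
    }
}
```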
- - - - Calculate Norm 2 of the column in matrix starting from row - - Source matrix - The number of rows in - Column index - Start row index - Norm2 (Euclidean norm) of the column - - - - Calculate Norm 2 of the vector starting from index - - Source vector - Start index - Norm2 (Euclidean norm) of the vector - - - - Calculate dot product of and conjugating the first vector. - - Source matrix - The number of rows in - Index of column A - Index of column B - Starting row index - Dot product value - - - - Performs rotation of points in the plane. Given two vectors x and y , - each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) - - Source matrix - The number of rows in - Index of column A - Index of column B - scalar cos value - scalar sin value - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Complex version of the class. - - - - - Initializes a new instance of the Matrix class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Returns the conjugate transpose of this matrix. - - The conjugate transpose of this matrix. - - - - Puts the conjugate transpose of this matrix into the result matrix. - - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to divide by each element of the matrix. - The matrix to store the result of the division. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. 
- The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - - Computes the trace of this matrix. 
* Matrix reductions: the induced L1 norm (maximum absolute column sum), the induced infinity norm (maximum absolute row sum), the entry-wise Frobenius norm (square root of the sum of squared values), row- and column-wise p-norms and normalization to a unit p-norm (typical p values: 1, 2 and positive infinity), row/column value sums and absolute-value sums, and a test for Hermitian (conjugate-symmetric) matrices.
* BiCgStab: the Bi-Conjugate Gradient Stabilized iterative solver, an 'improvement' of the standard Conjugate Gradient (CG) solver that, unlike CG, can be used on non-symmetric matrices; much of its success depends on selecting a proper preconditioner. The algorithm is taken from "Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods" by Barrett, Berry, Chan, Demmel, Donato, Dongarra, Eijkhout, Pozo, Romine and van der Vorst (http://www.netlib.org/templates/Templates.html), Chapter 2, section 2.3.8, page 27. The original documentation points to example code; a hedged usage sketch follows below.
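The documented Solve entry point takes the coefficient matrix A, the right-hand side b, the result vector x, an iterator that controls when to stop, and a preconditioner. The sketch below follows that shape; the concrete type names (SparseMatrix, DenseVector, BiCgStab, Iterator, the stop criteria and DiagonalPreconditioner) follow Math.NET Numerics conventions and should be treated as assumptions here.

```csharp
using Complex = System.Numerics.Complex;
using MathNet.Numerics.LinearAlgebra.Complex;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class IterativeSolveSketch
{
    static void Main()
    {
        // Small, diagonally dominant test system A*x = b.
        var a = SparseMatrix.OfArray(new Complex[,]
        {
            { 4, 1, 0 },
            { 1, 4, 1 },
            { 0, 1, 4 }
        });
        var b = DenseVector.OfArray(new Complex[] { 1, 2, 3 });
        var x = new DenseVector(3);   // result vector, initialised to zero

        // Stop after 1000 iterations or once the residual is small enough
        // (criterion type names assumed, Math.NET style).
        var iterator = new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(1000),
            new ResidualStopCriterion<Complex>(1e-10));

        // Solve(A, b, x, iterator, preconditioner), as described in the documentation.
        var solver = new BiCgStab();
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

        // True residual check, also as described: residual = b - A*x.
        System.Console.WriteLine((b - a * x).L2Norm());
    }
}
```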
* Each solver exposes a true-residual helper (residual = b - A*x) and Solve(A, b, x, iterator, preconditioner), where the iterator controls when to stop iterating.
* CompositeSolver: builds the actual solver from a sequence of sub-solvers; if an iterator is passed in, it is shared by all sub-solvers. Based on "Faster PDE-based simulations using robust composite linear solvers" by S. Bhowmick, P. Raghavan, L. McInnes and B. Norris, Future Generation Computer Systems, Vol. 20, 2004, pp. 373-387.
* DiagonalPreconditioner: uses the inverse of the matrix diagonal as preconditioning values; Initialize requires a non-null square matrix, and Approximate returns an approximate solution of Ax = b (a minimal sketch of the idea follows below).
* GpBiCg: the Generalized Product Bi-Conjugate Gradient solver, an alternative to BiCGStab that can likewise be used on non-symmetric matrices and is equally sensitive to the choice of preconditioner. Taken from "GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness" by S. Fujino, Applied Numerical Mathematics, Vol. 41, 2002, pp. 107-117.
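The diagonal preconditioner's Approximate step amounts to dividing each right-hand-side entry by the corresponding diagonal entry of A. A self-contained sketch of that idea on plain arrays (not the library's implementation):

```csharp
using System.Numerics;

static class JacobiPreconditionerSketch
{
    // Precompute 1/diag(A); assumes a non-zero diagonal.
    public static Complex[] BuildInverseDiagonal(Complex[,] a)
    {
        int n = a.GetLength(0);
        var inv = new Complex[n];
        for (int i = 0; i < n; i++)
            inv[i] = Complex.One / a[i, i];
        return inv;
    }

    // Approximate solution of A*x = b: x[i] = b[i] / A[i,i].
    public static void Approximate(Complex[] invDiag, Complex[] b, Complex[] x)
    {
        for (int i = 0; i < b.Length; i++)
            x[i] = invDiag[i] * b[i];
    }
}
```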
* GpBiCg switches between BiCGStab-style and GPBiCG-style steps; the number of steps of each kind taken before switching is configurable, and a helper decides which phase a given iteration belongs to.
* ILU0Preconditioner: an incomplete, level-0 LU factorization preconditioner. L and U are stored combined in a single matrix to reduce storage, and the upper and lower triangular factors can be retrieved separately; Initialize requires a non-null square matrix. Algorithm taken from "Iterative Methods for Sparse Linear Systems" by Yousef Saad, Chapter 10, section 10.3.2, page 275.
* ILUTPPreconditioner: an incomplete LU factorization with drop tolerance and partial pivoting, taken from "ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner" by Tzu-Yi Chen (Pomona College, Claremont CA 91711, USA), Lecture Notes in Computer Science, Vol. 3046, 2004, pp. 20-28 (algorithm in Section 2, page 22). Its settings: FillLevel (the allowed fill as a fraction of the non-zeros of the original matrix; standard value 200, must be positive), DropTolerance (entries with absolute value below it are dropped; standard value 0.0001, must be non-negative) and PivotTolerance (standard value 0.0, i.e. no pivoting; pivoting takes place when row(i,j) > row(i,i) / PivotTolerance for some j not equal to i). Changing any of these after creation invalidates the preconditioner and requires re-initialization. Debug accessors expose the L and U factors and the pivot array; the matrix is stored internally as a sparse matrix, so passing a dense matrix is not recommended. Internal helpers handle row pivoting, column swapping and descending index sorts (heap sort), and an element sorter orders the columns by their diagonal values.
* MILU0Preconditioner: a simple modified ILU(0) preconditioner (original Fortran code by Yousef Saad, 07 January 2004) with a switch between the modified and standard ILU(0) variants; Initialize requires a square matrix backed by sparse compressed-row storage.
* The underlying MILU0 routine takes the matrix order plus the CSR values, column indices and row pointers as input, produces the factorization in MSR format (values, combined row pointers/column indices, and a pointer to the diagonal elements) together with a flag selecting the modified variant (recommended), and returns 0 on success or k > 0 if a zero pivot was encountered at step k.
* MlkBiCgStab: the Multiple-Lanczos Bi-Conjugate Gradient Stabilized solver, an 'improvement' of the standard BiCGStab solver, taken from "ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors" by Man-Chung Yeung and Tony F. Chan, SIAM Journal on Scientific Computing, Vol. 21, No. 4, pp. 1263-1290. The number of starting vectors used as the basis of the Krylov subspace is configurable (it must be larger than 1 and smaller than the number of variables) and can be reset to its default; a series of orthonormal starting vectors can be supplied directly, otherwise random starting vectors are generated.
* TFQMR: a Transpose-Free Quasi-Minimal Residual solver, taken from "Iterative Methods for Sparse Linear Systems" by Yousef Saad, Chapter 7, section 7.4.3, page 219.
* TFQMR also exposes the true-residual helper, an is-even test used by the algorithm, and the same Solve(A, b, x, iterator, preconditioner) entry point.
* SparseMatrix: a matrix with sparse storage intended for very large matrices where most cells are zero, backed by the 3-array compressed-sparse-row (CSR) format (see the sketch below) and exposing a NonZerosCount property. It can be created directly from a storage instance (no copy, for advanced scenarios) or, zero-length sizes excluded, as an independent copy of another matrix, a two-dimensional or column-major array, an indexed or row-major enumerable, column/row arrays, column/row vectors or enumerables of columns/rows, from a diagonal vector or array, from a constant value or init function (full or diagonal only), or as an identity matrix. It also extracts the lower/upper and strictly lower/upper triangles, either as new matrices or into a caller-supplied result matrix of matching dimensions.
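For reference, the CSR layout mentioned above keeps three arrays: the non-zero values, their column indices, and per-row pointers into those two arrays. A minimal sketch with plain arrays (not the library's storage class), including a matrix-vector product:

```csharp
using System.Numerics;

static class CsrSketch
{
    // 3-array CSR for the 3x3 matrix
    //   [ 4 1 0 ]
    //   [ 0 4 1 ]
    //   [ 0 0 4 ]
    static readonly Complex[] Values     = { 4, 1, 4, 1, 4 };
    static readonly int[]     ColumnIdx  = { 0, 1, 1, 2, 2 };
    static readonly int[]     RowPointer = { 0, 2, 4, 5 };  // row i occupies [RowPointer[i], RowPointer[i+1])

    // y = A * x using the CSR arrays.
    public static Complex[] Multiply(Complex[] x)
    {
        var y = new Complex[RowPointer.Length - 1];
        for (int i = 0; i < y.Length; i++)
            for (int k = RowPointer[i]; k < RowPointer[i + 1]; k++)
                y[i] += Values[k] * x[ColumnIdx[k]];
        return y;
    }
}
```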
* Matrix operators: +, -, unary negation, scalar multiplication on either side, matrix*matrix, matrix*vector and vector*matrix products; the binary operators allocate new memory, pick the representation (sparse or dense) of the denser operand, and throw on null arguments or non-conforming dimensions. Symmetry and Hermitian tests are provided.
* SparseVector: a vector with sparse storage for very large, mostly-zero vectors (not thread safe), with a NonZerosCount property and the same creation routes as the sparse matrix (from storage, from another vector, from plain or indexed enumerables, from a constant value or an init function; zero-length vectors are unsupported). Adding a non-zero scalar is explicitly warned against: the result is a 100% filled "sparse" vector, and a dense vector would be the better choice. It supports element-wise add/subtract/negate/conjugate, scalar multiply and divide, the dot product (sum of a[i]*b[i]) and conjugate dot product (sum of conj(a[i])*b[i]), the usual operators including modulus by a scalar, the indices of the absolute minimum and maximum, the element sum, the L1 (Manhattan) and infinity norms, the p-norm (sum(|x[i]|^p))^(1/p), pointwise multiplication, and Parse/TryParse from strings of the form 'n', 'n;n;..', '(n;n;..)' or '[n;n;...]' where n is a complex number.
* The complex Vector base class mirrors the matrix API: coerce small values to zero, conjugate, negate, scalar and vector add/subtract, scalar multiply/divide in both directions, pointwise multiply/divide/power/modulus/remainder/exp/log, dot and conjugate dot products, absolute minimum/maximum values and indices, the element sum, and the L1, L2, infinity and general p-norms.
* Vector reductions finish with the indices of the maximum and minimum elements and normalization to a unit p-norm.
* DenseMatrix: a matrix with dense storage backed by a one-dimensional array in column-major order (see the indexing sketch below); the row and column counts are cached in fields to speed up index calculations and the raw data array is exposed. Besides the same copy-based factory methods as the sparse matrix (other matrices, arrays, enumerables, column/row arrays and vectors, diagonals, constant values, init functions, identity), it can bind directly to a raw column-major array without copying (very efficient, but changes to the array and the matrix affect each other) and can be filled from a random distribution. It provides the induced L1 and infinity norms, the Frobenius norm, negation, conjugation, scalar and matrix add/subtract, scalar/vector/matrix multiplication including the transpose and conjugate-transpose variants, scalar division, and pointwise multiply/divide/power.
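Column-major storage as described above means element (row, column) sits at index column * rowCount + row in the flat data array. A minimal sketch of that mapping with a plain array (not the library's storage class):

```csharp
using System.Numerics;

static class ColumnMajorSketch
{
    // data holds a rows-by-cols matrix column by column:
    // data[col * rows + row] == A[row, col]
    public static Complex Get(Complex[] data, int rows, int row, int col)
        => data[col * rows + row];

    public static void Set(Complex[] data, int rows, int row, int col, Complex value)
        => data[col * rows + row] = value;
}
```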
* DenseMatrix rounds this out with the trace (square matrices only), the full operator set (again allocating new memory and choosing the denser operand's representation) and the symmetry/Hermitian tests.
* DenseVector: a vector with dense storage that can bind directly to a raw array without copying (changes to either affect the other), expose a reference to its internal data, or be created as an independent copy of another vector, an array, a plain or indexed enumerable, a constant value, an init function or a random distribution. It carries the same arithmetic, dot products, operators, absolute min/max indices, sums, L1/L2/infinity/p-norms, pointwise operations and Parse/TryParse helpers as the sparse vector.
* DiagonalMatrix: a matrix type for diagonal matrices, which may be non-square with the diagonal always starting at element (0,0); setting a non-diagonal entry throws an exception unless the value is 0.0 or NaN, in which case the matrix is left unchanged. It can be created from storage, from a size, from a constant diagonal value, by binding to a raw array of diagonal elements, as a copy of another (diagonal) matrix or two-dimensional array, from plain or indexed enumerables, from an init function, as an identity matrix or from a random distribution, and it implements negation, conjugation, addition, subtraction, and scalar, vector, matrix, transpose and conjugate-transpose multiplication.
- - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the division. - - - - Computes the determinant of this matrix. - - The determinant of this matrix. - - - - Returns the elements of the diagonal in a . - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Copies the values of the given array to the diagonal. - - The array to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - - Copies the values of the given to the diagonal. - - The vector to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - Calculates the condition number of this matrix. - The condition number of the matrix. - - - Computes the inverse of this matrix. - If is not a square matrix. - If is singular. - The inverse of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. 
- - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Permute the columns of a matrix according to a permutation. - - The column permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Permute the rows of a matrix according to a permutation. - - The row permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Evaluates whether this matrix is symmetric. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A class which encapsulates the functionality of a Cholesky factorization. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Gets the determinant of the matrix for which the Cholesky matrix was computed. - - - - - Gets the log determinant of the matrix for which the Cholesky matrix was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for dense matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. 
- If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. 
Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. 
- - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. 
- - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. - - Source matrix to reduce - Output: Arrays for internal storage of real parts of eigenvalues - Output: Arrays for internal storage of imaginary parts of eigenvalues - Output: Arrays that contains further information about the transformations. - Order of initial matrix - This is derived from the Algol procedures HTRIDI by - Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Determines eigenvectors by undoing the symmetric tridiagonalize transformation - - The eigen vectors to work on. - Previously tridiagonalized matrix by . - Contains further information about the transformations - Input matrix order - This is derived from the Algol procedures HTRIBK, by - by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. 
Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - The eigen values to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. 
- - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Complex32 value z1 - Complex32 value z2 - Result multiplication of signum function and absolute value - - - - Interchanges two vectors and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Calculate Norm 2 of the column in matrix starting from row - - Source matrix - The number of rows in - Column index - Start row index - Norm2 (Euclidean norm) of the column - - - - Calculate Norm 2 of the vector starting from index - - Source vector - Start index - Norm2 (Euclidean norm) of the vector - - - - Calculate dot product of and conjugating the first vector. - - Source matrix - The number of rows in - Index of column A - Index of column B - Starting row index - Dot product value - - - - Performs rotation of points in the plane. Given two vectors x and y , - each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) - - Source matrix - The number of rows in - Index of column A - Index of column B - scalar cos value - scalar sin value - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Complex32 version of the class. 
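All of the factorization classes described above (Cholesky, LU, QR, EVD, SVD) expose the same Solve pattern for AX = B and Ax = b. A hedged sketch of how they are typically consumed, shown with the double-precision matrix builder as a stand-in for the Complex32 types documented here:

```csharp
// Minimal sketch of the common factorization Solve pattern (assumed MathNet.Numerics API).
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationSketch
{
    static void Main()
    {
        // Symmetric positive definite, so Cholesky applies; the others accept general matrices.
        var a = Matrix<double>.Build.DenseOfArray(new[,]
        {
            { 4.0, 1.0 },
            { 1.0, 3.0 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });

        var x1 = a.Cholesky().Solve(b); // A = L*L'
        var x2 = a.LU().Solve(b);       // P*A = L*U
        var x3 = a.QR().Solve(b);       // A = Q*R (Householder)
        var x4 = a.Svd().Solve(b);      // A = U*S*V^T

        Console.WriteLine(x1);
        // All four factorizations agree up to rounding.
        Console.WriteLine((x2 - x1).L2Norm() + (x3 - x1).L2Norm() + (x4 - x1).L2Norm());
    }
}
```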
- - - - - Initializes a new instance of the Matrix class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Returns the conjugate transpose of this matrix. - - The conjugate transpose of this matrix. - - - - Puts the conjugate transpose of this matrix into the result matrix. - - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to divide by each element of the matrix. - The matrix to store the result of the division. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. 
- - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Calculates the p-norms of all row vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the p-norms of all column vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all row vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all column vectors to a unit p-norm. 
- Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the value sum of each row vector. - - - - - Calculates the absolute value sum of each row vector. - - - - - Calculates the value sum of each column vector. - - - - - Calculates the absolute value sum of each column vector. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' - of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the - BiCGStab can be used on non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the proper preconditioner.
- The Bi-CGSTAB algorithm was taken from:
- Templates for the solution of linear systems: Building blocks for iterative methods
- Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Charles Romine and Henk van der Vorst
- Url: http://www.netlib.org/templates/Templates.html
- Algorithm is described in Chapter 2, section 2.3.8, page 27
- The example code below provides an indication of the possible use of the solver.
-
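The example the remark above refers to is not included in this hunk. As a stand-in, here is a minimal sketch of the documented Solve(matrix, input, result, iterator, preconditioner) call; the concrete class and namespace names (BiCgStab, DiagonalPreconditioner, Iterator<double> and its stop criteria) are the usual MathNet.Numerics double-precision ones and are assumptions here, with the Complex32 solver documented above following the same shape.

```csharp
// Minimal sketch, assuming the MathNet.Numerics iterative solver API.
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class BiCgStabSketch
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new[,]
        {
            { 4.0, 1.0, 0.0 },
            { 1.0, 3.0, 1.0 },
            { 0.0, 1.0, 2.0 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
        var x = Vector<double>.Build.Dense(3); // result vector, filled in by Solve

        // Stop after 1000 iterations or once the residual falls below 1e-10.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        new BiCgStab().Solve(a, b, x, iterator, new DiagonalPreconditioner());
        Console.WriteLine(x);
    }
}
```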
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient , A. - The solution , b. - The result , x. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A composite matrix solver. The actual solver is made by a sequence of - matrix solvers. - - - - Solver based on:
- Faster PDE-based simulations using robust composite linear solvers
- S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
- Future Generation Computer Systems, Vol 20, 2004, pp 373-387
-
- Note that if an iterator is passed to this solver it will be used for all the sub-solvers.
-
- - - The collection of solvers that will be used - - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A diagonal preconditioner. The preconditioner uses the inverse - of the matrix diagonal as preconditioning values. - - - - - The inverse of the matrix diagonal. - - - - - Returns the decomposed matrix diagonal. - - The matrix diagonal. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - A Generalized Product Bi-Conjugate Gradient iterative matrix solver. - - - - The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an - alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. - Unlike the CG solver the GPBiCG solver can be used on - non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the proper preconditioner.
- The GPBiCG algorithm was taken from:
- GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness
- S. Fujino
- Applied Numerical Mathematics, Volume 41, 2002, pp 107-117
-
- The example code below provides an indication of the possible use of the solver.
-
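The calling pattern is the same as for BiCgStab above; only the solver type changes. A hedged sketch, again assuming the double-precision MathNet.Numerics class names, with the BiCGStab/GPBiCG switching thresholds described in the member documentation below left at their defaults:

```csharp
// Minimal sketch: GPBiCG driven exactly like BiCgStab above (assumed API).
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class GpBiCgSketch
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new[,]
        {
            { 2.0, 0.0, 0.0 },
            { 0.0, 3.0, 0.0 },
            { 0.0, 0.0, 4.0 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 2.0, 6.0, 12.0 });
        var x = Vector<double>.Build.Dense(3);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        new GpBiCg().Solve(a, b, x, iterator, new DiagonalPreconditioner());
        Console.WriteLine(x); // expected roughly (1, 2, 3)
    }
}
```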
- - - Indicates the number of BiCGStab steps should be taken - before switching. - - - - - Indicates the number of GPBiCG steps should be taken - before switching. - - - - - Gets or sets the number of steps taken with the BiCgStab algorithm - before switching over to the GPBiCG algorithm. - - - - - Gets or sets the number of steps taken with the GPBiCG algorithm - before switching over to the BiCgStab algorithm. - - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Decide if to do steps with BiCgStab - - Number of iteration - true if yes, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - An incomplete, level 0, LU factorization preconditioner. - - - The ILU(0) algorithm was taken from:
- Iterative methods for sparse linear systems
- Yousef Saad
- Algorithm is described in Chapter 10, section 10.3.2, page 275
-
-
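A preconditioner such as the ILU(0) class above is not used on its own; it is handed to an iterative solver, which calls its Initialize and Approximate members. A hedged sketch, assuming the double-precision class name ILU0Preconditioner and the SparseMatrix builder from MathNet.Numerics:

```csharp
// Minimal sketch: ILU(0) as the preconditioner for an iterative solve (assumed API).
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class Ilu0Sketch
{
    static void Main()
    {
        // ILU(0) keeps the sparsity pattern of A, so a sparse matrix is the natural input.
        var a = SparseMatrix.OfArray(new[,]
        {
            { 4.0, 1.0, 0.0 },
            { 1.0, 4.0, 1.0 },
            { 0.0, 1.0, 4.0 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 5.0, 6.0, 5.0 });
        var x = Vector<double>.Build.Dense(3);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        // The solver initializes the preconditioner with A and uses it on every iteration.
        new BiCgStab().Solve(a, b, x, iterator, new ILU0Preconditioner());
        Console.WriteLine(x); // expected roughly (1, 1, 1)
    }
}
```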
- - - The matrix holding the lower (L) and upper (U) matrices. The - decomposition matrices are combined to reduce storage. - - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - A new matrix containing the lower triagonal elements. - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - This class performs an Incomplete LU factorization with drop tolerance - and partial pivoting. The drop tolerance indicates which additional entries - will be dropped from the factorized LU matrices. - - - The ILUTP-Mem algorithm was taken from:
- ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner
- Tzu-Yi Chen, Department of Mathematics and Computer Science,
- Pomona College, Claremont CA 91711, USA
- Published in:
- Lecture Notes in Computer Science
- Volume 3046 / 2004
- pp. 20 - 28
- Algorithm is described in Section 2, page 22
-
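The ILUTP preconditioner is driven the same way as ILU(0); the difference is the fill level, drop tolerance and pivot tolerance settings documented below. A hedged sketch, assuming the double-precision class name ILUTPPreconditioner with a (fillLevel, dropTolerance, pivotTolerance) constructor matching those settings:

```csharp
// Minimal sketch: ILUTP with explicit settings (class and constructor names assumed).
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class IlutpSketch
{
    static void Main()
    {
        var a = SparseMatrix.OfArray(new[,]
        {
            { 10.0, 2.0, 0.0 },
            {  3.0, 9.0, 1.0 },
            {  0.0, 4.0, 8.0 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 12.0, 13.0, 12.0 });
        var x = Vector<double>.Build.Dense(3);

        // fill level, drop tolerance, pivot tolerance - see the settings described below.
        var preconditioner = new ILUTPPreconditioner(10.0, 1e-4, 0.5);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        new BiCgStab().Solve(a, b, x, iterator, preconditioner);
        Console.WriteLine(x); // expected roughly (1, 1, 1)
    }
}
```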
- - - The default fill level. - - - - - The default drop tolerance. - - - - - The decomposed upper triangular matrix. - - - - - The decomposed lower triangular matrix. - - - - - The array containing the pivot values. - - - - - The fill level. - - - - - The drop tolerance. - - - - - The pivot tolerance. - - - - - Initializes a new instance of the class with the default settings. - - - - - Initializes a new instance of the class with the specified settings. - - - The amount of fill that is allowed in the matrix. The value is a fraction of - the number of non-zero entries in the original matrix. Values should be positive. - - - The absolute drop tolerance which indicates below what absolute value an entry - will be dropped from the matrix. A drop tolerance of 0.0 means that no values - will be dropped. Values should always be positive. - - - The pivot tolerance which indicates at what level pivoting will take place. A - value of 0.0 means that no pivoting will take place. - - - - - Gets or sets the amount of fill that is allowed in the matrix. The - value is a fraction of the number of non-zero entries in the original - matrix. The standard value is 200. - - - - Values should always be positive and can be higher than 1.0. A value lower - than 1.0 means that the eventual preconditioner matrix will have fewer - non-zero entries as the original matrix. A value higher than 1.0 means that - the eventual preconditioner can have more non-zero values than the original - matrix. - - - Note that any changes to the FillLevel after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the absolute drop tolerance which indicates below what absolute value - an entry will be dropped from the matrix. The standard value is 0.0001. - - - - The values should always be positive and can be larger than 1.0. A low value will - keep more small numbers in the preconditioner matrix. A high value will remove - more small numbers from the preconditioner matrix. - - - Note that any changes to the DropTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the pivot tolerance which indicates at what level pivoting will - take place. The standard value is 0.0 which means pivoting will never take place. - - - - The pivot tolerance is used to calculate if pivoting is necessary. Pivoting - will take place if any of the values in a row is bigger than the - diagonal value of that row divided by the pivot tolerance, i.e. pivoting - will take place if row(i,j) > row(i,i) / PivotTolerance for - any j that is not equal to i. - - - Note that any changes to the PivotTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the lower triagonal elements. 
- - - - Returns the pivot array. This array is not needed for normal use because - the preconditioner will return the solution vector values in the proper order. - - - This method is used for debugging purposes only and should normally not be used. - - The pivot array. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. Note that the - method takes a general matrix type. However internally the data is stored - as a sparse matrix. Therefore it is not recommended to pass a dense matrix. - - If is . - If is not a square matrix. - - - - Pivot elements in the according to internal pivot array - - Row to pivot in - - - - Was pivoting already performed - - Pivots already done - Current item to pivot - true if performed, otherwise false - - - - Swap columns in the - - Source . - First column index to swap - Second column index to swap - - - - Sort vector descending, not changing vector but placing sorted indices to - - Start sort form - Sort till upper bound - Array with sorted vector indices - Source - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - Pivot elements in according to internal pivot array - - Source . - Result after pivoting. - - - - An element sort algorithm for the class. - - - This sort algorithm is used to sort the columns in a sparse matrix based on - the value of the element on the diagonal of the matrix. - - - - - Sorts the elements of the vector in decreasing - fashion. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Sorts the elements of the vector in decreasing - fashion using heap sort algorithm. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Build heap for double indices - - Root position - Length of - Indices of - Target - - - - Sift double indices - - Indices of - Target - Root position - Length of - - - - Sorts the given integers in a decreasing fashion. - - The values. - - - - Sort the given integers in a decreasing fashion using heapsort algorithm - - Array of values to sort - Length of - - - - Build heap - - Target values array - Root position - Length of - - - - Sift values - - Target value array - Root position - Length of - - - - Exchange values in array - - Target values array - First value to exchange - Second value to exchange - - - - A simple milu(0) preconditioner. - - - Original Fortran code by Yousef Saad (07 January 2004) - - - - Use modified or standard ILU(0) - - - - Gets or sets a value indicating whether to use modified or standard ILU(0). - - - - - Gets a value indicating whether the preconditioner is initialized. - - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square or is not an - instance of SparseCompressedRowMatrixStorage. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector b. - The left hand side vector x. - - - - MILU0 is a simple milu(0) preconditioner. - - Order of the matrix. - Matrix values in CSR format (input). - Column indices (input). 
- Row pointers (input). - Matrix values in MSR format (output). - Row pointers and column indices (output). - Pointer to diagonal elements (output). - True if the modified/MILU algorithm should be used (recommended) - Returns 0 on success or k > 0 if a zero pivot was encountered at step k. - - - - A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' - of the standard BiCgStab solver. - - - The algorithm was taken from:
- ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors
- Man-Chung Yeung and Tony F. Chan
- SIAM Journal of Scientific Computing
- Volume 21, Number 4, pp. 1263-1290
- The example code below provides an indication of the possible use of the solver.
-
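Again the calling pattern is the one already shown for BiCgStab; the number of Lanczos starting vectors is left at its default here (it is exposed as a property, see the member documentation below). Class names are the assumed double-precision MathNet.Numerics ones:

```csharp
// Minimal sketch: ML(k)-BiCGStab with the default number of starting vectors (assumed API).
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class MlkBiCgStabSketch
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new[,]
        {
            { 5.0, 1.0, 0.0 },
            { 1.0, 5.0, 1.0 },
            { 0.0, 1.0, 5.0 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 6.0, 7.0, 6.0 });
        var x = Vector<double>.Build.Dense(3);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        new MlkBiCgStab().Solve(a, b, x, iterator, new DiagonalPreconditioner());
        Console.WriteLine(x); // expected roughly (1, 1, 1)
    }
}
```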
- - - The default number of starting vectors. - - - - - The collection of starting vectors which are used as the basis for the Krylov sub-space. - - - - - The number of starting vectors used by the algorithm - - - - - Gets or sets the number of starting vectors. - - - Must be larger than 1 and smaller than the number of variables in the matrix that - for which this solver will be used. - - - - - Resets the number of starting vectors to the default value. - - - - - Gets or sets a series of orthonormal vectors which will be used as basis for the - Krylov sub-space. - - - - - Gets the number of starting vectors to create - - Maximum number - Number of variables - Number of starting vectors to create - - - - Returns an array of starting vectors. - - The maximum number of starting vectors that should be created. - The number of variables. - - An array with starting vectors. The array will never be larger than the - but it may be smaller if - the is smaller than - the . - - - - - Create random vectors array - - Number of vectors - Size of each vector - Array of random vectors - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Source A. - Residual data. - x data. - b data. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. - - - - The TFQMR algorithm was taken from:
"Iterative Methods for Sparse Linear Systems", Yousef Saad; the algorithm is described in Chapter 7, section 7.4.3, page 219.

The example code below provides an indication of the possible use of the solver.
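The original `<example>` block is missing here as well; the sketch below sets up a small system, runs the solver through the same documented Solve signature, and checks the result with the true residual b - Ax described below. The TFQMR and UnitPreconditioner class names are recalled from Math.NET Numerics and may differ between versions.

```csharp
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;  // TFQMR (assumed name)
using MathNet.Numerics.LinearAlgebra.Solvers;         // Iterator, UnitPreconditioner (assumed name)

var A = SparseMatrix.OfArray(new double[,] { { 5, 2 }, { 2, 5 } });
var b = DenseVector.OfArray(new[] { 1.0, 1.0 });
var x = new DenseVector(2);

var iterator = new Iterator<double>(new ResidualStopCriterion<double>(1e-10));

// No real preconditioner here, so an identity (unit) preconditioner is passed.
new TFQMR().Solve(A, b, x, iterator, new UnitPreconditioner<double>());

// True residual r = b - A*x, as computed by the CalculateTrueResidual helpers.
double residualNorm = (b - A * x).L2Norm();
```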
[Stripped XML documentation for the remaining TFQMR members: a true-residual calculation (residual = b - Ax), an IsEven helper, and Solve(matrix, b, x, iterator, preconditioner).]

A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. The underlying storage scheme is the 3-array compressed-sparse-row (CSR) format (Wikipedia - CSR).

[Stripped XML documentation for the SparseMatrix members: the number of non-zero elements; a constructor that wraps an existing matrix storage instance without copying (intended for advanced scenarios) and constructors for square or rectangular zero matrices (zero-length matrices are not supported); OfMatrix, OfArray, OfIndexed, OfRowMajor and OfColumnMajor factories, the OfColumns/OfColumnArrays/OfColumnVectors, OfRows/OfRowArrays/OfRowVectors and OfDiagonalVector/OfDiagonalArray copy factories (each creating an independent matrix with newly allocated storage), and Create/CreateDiagonal with a constant value or an init function plus CreateIdentity; lower/upper and strictly lower/upper triangle extraction, either as a new matrix or into a supplied result matrix (throwing if the result is null or has different dimensions); negation into a result matrix; the induced infinity norm (maximum absolute row sum) and the entry-wise Frobenius norm; and add, subtract, scalar multiply, matrix and matrix-vector multiply (including the transposed variants) and pointwise multiply, each writing into a result matrix or vector.]
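To make the stripped SparseMatrix entries concrete, here is a minimal sketch of the documented factories and norms; SparseMatrix.OfArray, NonZerosCount, InfinityNorm, FrobeniusNorm and LowerTriangle are standard Math.NET Numerics members, though the exact overloads may vary by version.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

// CSR-backed matrix built from a dense 2-D array; zero cells are not stored.
var m = SparseMatrix.OfArray(new double[,]
{
    { 2, 0, 0 },
    { 1, 3, 0 },
    { 0, 0, 4 }
});

Console.WriteLine(m.NonZerosCount);    // 4 stored values
Console.WriteLine(m.InfinityNorm());   // maximum absolute row sum = 4
Console.WriteLine(m.FrobeniusNorm());  // sqrt(2^2 + 1^2 + 3^2 + 4^2)
var lower = m.LowerTriangle();         // lower triangle as a new matrix
```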
[Stripped XML documentation for the remaining SparseMatrix members: pointwise divide into a result matrix; IsSymmetric and IsHermitian checks; and the operator overloads for addition, subtraction, negation, scalar multiplication, matrix-matrix and matrix-vector multiplication and modulus, which allocate new memory and, for the binary matrix operations, choose the representation of the denser operand for the result.]

A vector with sparse storage, intended for very large vectors where most of the cells are zero. The sparse vector is not thread safe.

[Stripped XML documentation for the SparseVector members: the number of non-zero elements; a constructor that wraps an existing vector storage instance without copying (advanced scenarios) and a zero-vector constructor (the length must be at least one); OfVector, OfEnumerable and OfIndexedEnumerable copy factories plus Create with a constant value or an init function; adding a scalar to every element, with the warning that a non-zero scalar turns the result into a 100% filled and very inefficient sparse vector, where a dense vector would be the better choice; vector add/subtract, negate, conjugate and scalar multiply into a result vector; DotProduct and ConjugateDotProduct; the corresponding operator overloads, including scalar (complex) multiplication, division and modulus; and the indices of the absolute minimum and maximum elements.]
[Stripped XML documentation for the remaining sparse-vector members: the element sum; the L1 (Manhattan), infinity and p-norms; pointwise multiplication into a result vector; and Parse/TryParse helpers that read the vector from strings of the form 'n', 'n;n;..', '(n;n;..)' or '[n;n;...]', where n is a Complex32, optionally using an IFormatProvider for culture-specific formatting.]

[Stripped XML documentation for the Complex32 version of the Vector class: a constructor; CoerceZero, which sets all values whose absolute value is smaller than a threshold to zero; conjugate and negate into a result vector; scalar and vector add/subtract, scalar multiply and divide (in both directions); pointwise multiply, divide, power (by a scalar or an exponent vector), canonical modulus, remainder, exponential and natural logarithm; DotProduct and ConjugateDotProduct; scalar canonical modulus and remainder in both directions; the absolute minimum/maximum values and indices; the element sum; and the L1 (Manhattan) and L2 (Euclidean) norms.]
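The norm and pointwise entries read the same for the double-precision vector types, so the sketch below uses the double variants; the member names (NonZerosCount, L1Norm, L2Norm, InfinityNorm, DotProduct, PointwiseMultiply) are standard Math.NET Numerics API, assumed here to behave as described above.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

var v = SparseVector.OfEnumerable(new double[] { 0, 3, 0, -4 });
var w = DenseVector.OfArray(new[] { 1.0, 1.0, 1.0, 1.0 });

Console.WriteLine(v.NonZerosCount);   // 2 stored values
Console.WriteLine(v.L1Norm());        // |3| + |-4| = 7
Console.WriteLine(v.L2Norm());        // sqrt(9 + 16) = 5
Console.WriteLine(v.InfinityNorm());  // 4
Console.WriteLine(v.DotProduct(w));   // 3 + (-4) = -1
var p = v.PointwiseMultiply(w);       // element-wise product, new vector
```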
[Stripped XML documentation for the remaining members of the same vector class: the infinity norm (maximum absolute value), the p-norm ret = (∑|this[i]|^p)^(1/p), the indices of the maximum and minimum elements, and normalization to a unit vector with respect to the p-norm.]

Generic linear algebra type builders, for situations where a matrix or vector must be created in a generic way. Usage of generic builders should not be required in normal user code.

[Stripped XML documentation for the matrix builder members: the values 0.0 and 1.0 for type T; SameAs helpers that create a matrix of the same kind (and optionally dimensions) as one or two example operands; dense matrices with values sampled from a provided random distribution or the standard distribution, and positive definite dense matrices built from products of two such samples; dense matrices created from an existing storage instance (no copy, advanced use), by dimensions, bound directly to a raw column-major array, filled with a constant value or an init function, diagonal, or identity; DenseOf... copy factories for matrices, 2-D arrays, indexed/column-major/row-major enumerables, column or row enumerables/arrays/vectors, diagonal vectors/arrays, and block matrices assembled from a 2-D array of matrices (misaligned blocks are placed in the top-left corner of their cell with the remaining fields left zero); the equivalent Sparse, SparseDiagonal, SparseIdentity and SparseOf... factories; and Diagonal factories bound to or copied from a raw diagonal array or vector, plus DiagonalIdentity.]

[Stripped XML documentation for the vector builder members: the values 0.0 and 1.0 for type T; SameAs helpers; dense vectors with randomly sampled values; dense vectors created from a storage instance, by size, bound directly to an array, filled with a constant value or an init function, or copied from a vector, array, enumerable or indexed enumerable; and the equivalent Sparse and SparseOf... factories. The same matrix builder documentation is then repeated for a second element type; the type names were stripped together with the XML markup.]
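The generic builder entries above are normally reached through the non-generic Build properties; a short sketch of that entry point (Matrix<double>.Build and Vector<double>.Build are the standard Math.NET Numerics accessors, and the overload set varies somewhat by version):

```csharp
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build;
var V = Vector<double>.Build;

var zero     = M.Dense(3, 3);                                // 3x3 zero matrix
var identity = M.DenseIdentity(3);                           // one-diagonal identity
var fromInit = M.Dense(3, 3, (i, j) => i == j ? 2.0 : 0.0);  // init function
var sparse   = M.Sparse(1000, 1000);                         // large, mostly-zero matrix
var diagonal = M.DenseOfDiagonalArray(new[] { 1.0, 2.0, 3.0 });

var ones     = V.Dense(4, 1.0);                              // constant fill
var ramp     = V.Dense(4, i => (double)i);                   // init function
var sparseV  = V.Sparse(1000);                               // zero sparse vector
```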
- Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new square diagonal matrix directly binding to a raw array. - The array is assumed to represent the diagonal values and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new diagonal matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal matrix and initialize each diagonal value using the provided init function. - - - - - Create a new diagonal identity matrix with a one-diagonal. - - - - - Create a new diagonal identity matrix with a one-diagonal. - - - - - Create a new diagonal matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new vector straight from an initialized matrix storage instance. - If you have an instance of a discrete storage type instead, use their direct methods instead. - - - - - Create a new vector with the same kind of the provided example. - - - - - Create a new vector with the same kind and dimension of the provided example. - - - - - Create a new vector with the same kind of the provided example. - - - - - Create a new vector with a type that can represent and is closest to both provided samples. - - - - - Create a new vector with a type that can represent and is closest to both provided samples and the dimensions of example. - - - - - Create a new vector with a type that can represent and is closest to both provided samples. - - - - - Create a new dense vector with values sampled from the provided random distribution. - - - - - Create a new dense vector with values sampled from the standard distribution with a system random source. - - - - - Create a new dense vector with values sampled from the standard distribution with a system random source. - - - - - Create a new dense vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a dense vector of T with the given size. - - The size of the vector. - - - - Create a dense vector of T that is directly bound to the specified array. - - - - - Create a new dense vector and initialize each value using the provided value. - - - - - Create a new dense vector and initialize each value using the provided init function. - - - - - Create a new dense vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given array. - This new vector will be independent from the array. 
- A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a sparse vector of T with the given size. - - The size of the vector. - - - - Create a new sparse vector and initialize each value using the provided value. - - - - - Create a new sparse vector and initialize each value using the provided init function. - - - - - Create a new sparse vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given array. - This new vector will be independent from the array. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - A class which encapsulates the functionality of a Cholesky factorization. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - Supported data types are double, single, , and . - - - - Gets the lower triangular form of the Cholesky matrix. - - - - - Gets the determinant of the matrix for which the Cholesky matrix was computed. - - - - - Gets the log determinant of the matrix for which the Cholesky matrix was computed. - - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. 
- If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - Supported data types are double, single, , and . - - - - Gets or sets a value indicating whether matrix is symmetric or not - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - Gets or sets the eigen values (λ) of matrix in ascending value. - - - - - Gets or sets eigenvectors. - - - - - Gets or sets the block diagonal eigenvalue matrix. - - - - - Solves a system of linear equations, AX = B, with A EVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, AX = B, with A EVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - Supported data types are double, single, , and . - - - - Classes that solves a system of linear equations, AX = B. - - Supported data types are double, single, , and . - - - - Solves a system of linear equations, AX = B. - - The right hand side Matrix, B. - The left hand side Matrix, X. - - - - Solves a system of linear equations, AX = B. - - The right hand side Matrix, B. - The left hand side Matrix, X. - - - - Solves a system of linear equations, Ax = b - - The right hand side vector, b. - The left hand side Vector, x. - - - - Solves a system of linear equations, Ax = b. - - The right hand side vector, b. - The left hand side Matrix>, x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - Supported data types are double, single, , and . - - - - Gets the lower triangular factor. - - - - - Gets the upper triangular factor. - - - - - Gets the permutation applied to LU factorization. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. 
- - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - The type of QR factorization go perform. - - - - - Compute the full QR factorization of a matrix. - - - - - Compute the thin QR factorization of a matrix. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - Supported data types are double, single, , and . - - - - Gets or sets orthogonal Q matrix - - - - - Gets the upper triangular factor R. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - Supported data types are double, single, , and . - - - Indicating whether U and VT matrices have been computed during SVD factorization. - - - - Gets the singular values (Σ) of matrix in ascending value. - - - - - Gets the left singular vectors (U - m-by-m unitary matrix) - - - - - Gets the transpose right singular vectors (transpose of V, an n-by-n unitary matrix) - - - - - Returns the singular values as a diagonal . - - The singular values as a diagonal . 
- - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Defines the base class for Matrix classes. - - - Defines the base class for Matrix classes. - - Supported data types are double, single, , and . - - Defines the base class for Matrix classes. - - - Defines the base class for Matrix classes. - - - - - The value of 1.0. - - - - - The value of 0.0. - - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result matrix. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts each element of the matrix from a scalar and stores the result in the result matrix. - - The scalar to subtract from. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. 
- - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar denominator to use. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar numerator to use. - The matrix to store the result of the division. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent matrix and store the result into the result matrix. - - The exponent matrix to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Adds a scalar to each element of the matrix. - - The scalar to add. - The result of the addition. - If the two matrices don't have the same dimensions. - - - - Adds a scalar to each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the addition. 
- If the two matrices don't have the same dimensions. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The result of the addition. - If the two matrices don't have the same dimensions. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix. - - The scalar to subtract. - A new matrix containing the subtraction of this matrix and the scalar. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result matrix. - - The scalar to subtract. - The matrix to store the result of the subtraction. - If this matrix and are not the same size. - - - - Subtracts each element of the matrix from a scalar. - - The scalar to subtract from. - A new matrix containing the subtraction of the scalar and this matrix. - - - - Subtracts each element of the matrix from a scalar and stores the result in the result matrix. - - The scalar to subtract from. - The matrix to store the result of the subtraction. - If this matrix and are not the same size. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Multiplies each element of this matrix with a scalar. - - The scalar to multiply with. - The result of the multiplication. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - If the result matrix's dimensions are not the same as this matrix. - - - - Divides each element of this matrix with a scalar. - - The scalar to divide with. - The result of the division. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - If the result matrix's dimensions are not the same as this matrix. - - - - Divides a scalar by each element of the matrix. - - The scalar to divide. - The result of the division. - - - - Divides a scalar by each element of the matrix and places results into the result matrix. - - The scalar to divide. - The matrix to store the result of the division. - If the result matrix's dimensions are not the same as this matrix. - - - - Multiplies this matrix by a vector and returns the result. - - The vector to multiply with. - The result of the multiplication. - If this.ColumnCount != rightSide.Count. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - If result.Count != this.RowCount. - If this.ColumnCount != .Count. - - - - Left multiply a matrix with a vector ( = vector * matrix ). - - The vector to multiply with. - The result of the multiplication. - If this.RowCount != .Count. - - - - Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. - - The vector to multiply with. - The result of the multiplication. - If result.Count != this.ColumnCount. - If this.RowCount != .Count. 
- - - - Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - If this.Columns != other.Rows. - If the result matrix's dimensions are not the this.Rows x other.Columns. - - - - Multiplies this matrix with another matrix and returns the result. - - The matrix to multiply with. - If this.Columns != other.Rows. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - If this.Columns != other.ColumnCount. - If the result matrix's dimensions are not the this.RowCount x other.RowCount. - - - - Multiplies this matrix with transpose of another matrix and returns the result. - - The matrix to multiply with. - If this.Columns != other.ColumnCount. - The result of the multiplication. - - - - Multiplies the transpose of this matrix by a vector and returns the result. - - The vector to multiply with. - The result of the multiplication. - If this.RowCount != rightSide.Count. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - If result.Count != this.ColumnCount. - If this.RowCount != .Count. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - If this.Rows != other.RowCount. - If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. - - - - Multiplies the transpose of this matrix with another matrix and returns the result. - - The matrix to multiply with. - If this.Rows != other.RowCount. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - If this.Columns != other.ColumnCount. - If the result matrix's dimensions are not the this.RowCount x other.RowCount. - - - - Multiplies this matrix with the conjugate transpose of another matrix and returns the result. - - The matrix to multiply with. - If this.Columns != other.ColumnCount. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix by a vector and returns the result. - - The vector to multiply with. - The result of the multiplication. - If this.RowCount != rightSide.Count. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - If result.Count != this.ColumnCount. - If this.RowCount != .Count. - - - - Multiplies the conjugate transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - If this.Rows != other.RowCount. - If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. - - - - Multiplies the conjugate transpose of this matrix with another matrix and returns the result. - - The matrix to multiply with. - If this.Rows != other.RowCount. 
- The result of the multiplication. - - - - Raises this square matrix to a positive integer exponent and places the results into the result matrix. - - The positive integer exponent to raise the matrix to. - The result of the power. - - - - Multiplies this square matrix with another matrix and returns the result. - - The positive integer exponent to raise the matrix to. - - - - Negate each element of this matrix. - - A matrix containing the negated values. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - if the result matrix's dimensions are not the same as this matrix. - - - - Complex conjugate each element of this matrix. - - A matrix containing the conjugated values. - - - - Complex conjugate each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - if the result matrix's dimensions are not the same as this matrix. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the matrix. - - The scalar denominator to use. - A matrix containing the results. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the matrix. - - The scalar numerator to use. - A matrix containing the results. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the matrix. - - The scalar numerator to use. - Matrix to store the results in. - - - - Computes the remainder (matrix % divisor), where the result has the sign of the dividend, - for each element of the matrix. - - The scalar denominator to use. - A matrix containing the results. - - - - Computes the remainder (matrix % divisor), where the result has the sign of the dividend, - for each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (dividend % matrix), where the result has the sign of the dividend, - for each element of the matrix. - - The scalar numerator to use. - A matrix containing the results. - - - - Computes the remainder (dividend % matrix), where the result has the sign of the dividend, - for each element of the matrix. - - The scalar numerator to use. - Matrix to store the results in. - - - - Pointwise multiplies this matrix with another matrix. - - The matrix to pointwise multiply with this one. - If this matrix and are not the same size. - A new matrix that is the pointwise multiplication of this matrix and . - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - If this matrix and are not the same size. - If this matrix and are not the same size. - - - - Pointwise divide this matrix by another matrix. - - The pointwise denominator matrix to use. - If this matrix and are not the same size. - A new matrix that is the pointwise division of this matrix and . - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use. - The matrix to store the result of the pointwise division. - If this matrix and are not the same size. 
- If this matrix and are not the same size. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - - - - Pointwise raise this matrix to an exponent. - - The exponent to raise this matrix values to. - The matrix to store the result into. - If this matrix and are not the same size. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - - - - Pointwise raise this matrix to an exponent. - - The exponent to raise this matrix values to. - The matrix to store the result into. - If this matrix and are not the same size. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix by another matrix. - - The pointwise denominator matrix to use. - If this matrix and are not the same size. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix by another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use. - The matrix to store the result of the pointwise modulus. - If this matrix and are not the same size. - If this matrix and are not the same size. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix by another matrix. - - The pointwise denominator matrix to use. - If this matrix and are not the same size. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix by another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use. - The matrix to store the result of the pointwise remainder. - If this matrix and are not the same size. - If this matrix and are not the same size. - - - - Helper function to apply a unary function to a matrix. The function - f modifies the matrix given to it in place. Before its - called, a copy of the 'this' matrix is first created, then passed to - f. The copy is then returned as the result - - Function which takes a matrix, modifies it in place and returns void - New instance of matrix which is the result - - - - Helper function to apply a unary function which modifies a matrix - in place. - - Function which takes a matrix, modifies it in place and returns void - The matrix to be passed to f and where the result is to be stored - If this vector and are not the same size. - - - - Helper function to apply a binary function which takes two matrices - and modifies the latter in place. A copy of the "this" matrix is - first made and then passed to f together with the other matrix. The - copy is then returned as the result - - Function which takes two matrices, modifies the second in place and returns void - The other matrix to be passed to the function as argument. It is not modified - The resulting matrix - If this matrix and are not the same dimension. - - - - Helper function to apply a binary function which takes two matrices - and modifies the second one in place - - Function which takes two matrices, modifies the second in place and returns void - The other matrix to be passed to the function as argument. It is not modified - The matrix to store the result. - The resulting matrix - If this matrix and are not the same dimension. - - - - Pointwise applies the exponent function to each value. - - - - - Pointwise applies the exponent function to each value. - - The matrix to store the result. 
- If this matrix and are not the same size. - - - - Pointwise applies the natural logarithm function to each value. - - - - - Pointwise applies the natural logarithm function to each value. - - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the abs function to each value - - - - - Pointwise applies the abs function to each value - - The vector to store the result - - - - Pointwise applies the acos function to each value - - - - - Pointwise applies the acos function to each value - - The vector to store the result - - - - Pointwise applies the asin function to each value - - - - - Pointwise applies the asin function to each value - - The vector to store the result - - - - Pointwise applies the atan function to each value - - - - - Pointwise applies the atan function to each value - - The vector to store the result - - - - Pointwise applies the atan2 function to each value of the current - matrix and a given other matrix being the 'x' of atan2 and the - 'this' matrix being the 'y' - - - - - - - Pointwise applies the atan2 function to each value of the current - matrix and a given other matrix being the 'x' of atan2 and the - 'this' matrix being the 'y' - - The other matrix 'y' - The matrix with the result and 'x' - - - - - Pointwise applies the ceiling function to each value - - - - - Pointwise applies the ceiling function to each value - - The vector to store the result - - - - Pointwise applies the cos function to each value - - - - - Pointwise applies the cos function to each value - - The vector to store the result - - - - Pointwise applies the cosh function to each value - - - - - Pointwise applies the cosh function to each value - - The vector to store the result - - - - Pointwise applies the floor function to each value - - - - - Pointwise applies the floor function to each value - - The vector to store the result - - - - Pointwise applies the log10 function to each value - - - - - Pointwise applies the log10 function to each value - - The vector to store the result - - - - Pointwise applies the round function to each value - - - - - Pointwise applies the round function to each value - - The vector to store the result - - - - Pointwise applies the sign function to each value - - - - - Pointwise applies the sign function to each value - - The vector to store the result - - - - Pointwise applies the sin function to each value - - - - - Pointwise applies the sin function to each value - - The vector to store the result - - - - Pointwise applies the sinh function to each value - - - - - Pointwise applies the sinh function to each value - - The vector to store the result - - - - Pointwise applies the sqrt function to each value - - - - - Pointwise applies the sqrt function to each value - - The vector to store the result - - - - Pointwise applies the tan function to each value - - - - - Pointwise applies the tan function to each value - - The vector to store the result - - - - Pointwise applies the tanh function to each value - - - - - Pointwise applies the tanh function to each value - - The vector to store the result - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - - Calculates the rank of the matrix. - - effective numerical rank, obtained from SVD - - - - Calculates the nullity of the matrix. - - effective numerical nullity, obtained from SVD - - - Calculates the condition number of this matrix. - The condition number of the matrix. 
- The condition number is calculated using singular value decomposition. - - - Computes the determinant of this matrix. - The determinant of this matrix. - - - - Computes an orthonormal basis for the null space of this matrix, - also known as the kernel of the corresponding matrix transformation. - - - - - Computes an orthonormal basis for the column space of this matrix, - also known as the range or image of the corresponding matrix transformation. - - - - Computes the inverse of this matrix. - The inverse of this matrix. - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N - with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. - - The other matrix. - The Kronecker product of the two matrices. - - - - Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N - with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. - - The other matrix. - The Kronecker product of the two matrices. - If the result matrix's dimensions are not (this.Rows * lower.rows) x (this.Columns * lower.Columns). - - - - Pointwise applies the minimum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the minimum with a scalar to each value. - - The scalar value to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the maximum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the maximum with a scalar to each value. - - The scalar value to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the absolute minimum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the absolute minimum with a scalar to each value. - - The scalar value to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the absolute maximum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the absolute maximum with a scalar to each value. - - The scalar value to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the minimum with the values of another matrix to each value. - - The matrix with the values to compare to. - - - - Pointwise applies the minimum with the values of another matrix to each value. - - The matrix with the values to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the maximum with the values of another matrix to each value. - - The matrix with the values to compare to. - - - - Pointwise applies the maximum with the values of another matrix to each value. - - The matrix with the values to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the absolute minimum with the values of another matrix to each value. - - The matrix with the values to compare to. - - - - Pointwise applies the absolute minimum with the values of another matrix to each value. - - The matrix with the values to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - - Pointwise applies the absolute maximum with the values of another matrix to each value. - - The matrix with the values to compare to. 
- - - - Pointwise applies the absolute maximum with the values of another matrix to each value. - - The matrix with the values to compare to. - The matrix to store the result. - If this matrix and are not the same size. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - For sparse matrices, the L2 norm is computed using a dense implementation of singular value decomposition. - In a later release, it will be replaced with a sparse implementation. - - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Calculates the p-norms of all row vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the p-norms of all column vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all row vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all column vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the value sum of each row vector. - - - - - Calculates the value sum of each column vector. - - - - - Calculates the absolute value sum of each row vector. - - - - - Calculates the absolute value sum of each column vector. - - - - - Indicates whether the current object is equal to another object of the same type. - - - An object to compare with this object. - - - true if the current object is equal to the parameter; otherwise, false. - - - - - Determines whether the specified is equal to this instance. - - The to compare with this instance. - - true if the specified is equal to this instance; otherwise, false. - - - - - Returns a hash code for this instance. - - - A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. - - - - - Creates a new object that is a copy of the current instance. - - - A new object that is a copy of this instance. - - - - - Returns a string that describes the type, dimensions and shape of this matrix. - - - - - Returns a string 2D array that summarizes the content of this matrix. - - - - - Returns a string 2D array that summarizes the content of this matrix. - - - - - Returns a string that summarizes the content of this matrix. - - - - - Returns a string that summarizes the content of this matrix. - - - - - Returns a string that summarizes this matrix. - - - - - Returns a string that summarizes this matrix. - The maximum number of cells can be configured in the class. - - - - - Returns a string that summarizes this matrix. - The maximum number of cells can be configured in the class. - The format string is ignored. - - - - - Initializes a new instance of the Matrix class. - - - - - Gets the raw matrix data storage. - - - - - Gets the number of columns. - - The number of columns. - - - - Gets the number of rows. - - The number of rows. - - - - Gets or sets the value at the given row and column, with range checking. - - - The row of the element. - - - The column of the element. - - The value to get or set. 
- This method is ranged checked. and - to get and set values without range checking. - - - - Retrieves the requested element without range checking. - - - The row of the element. - - - The column of the element. - - - The requested element. - - - - - Sets the value of the given element without range checking. - - - The row of the element. - - - The column of the element. - - - The value to set the element to. - - - - - Sets all values to zero. - - - - - Sets all values of a row to zero. - - - - - Sets all values of a column to zero. - - - - - Sets all values for all of the chosen rows to zero. - - - - - Sets all values for all of the chosen columns to zero. - - - - - Sets all values of a sub-matrix to zero. - - - - - Set all values whose absolute value is smaller than the threshold to zero, in-place. - - - - - Set all values that meet the predicate to zero, in-place. - - - - - Creates a clone of this instance. - - - A clone of the instance. - - - - - Copies the elements of this matrix to the given matrix. - - - The matrix to copy values into. - - - If target is . - - - If this and the target matrix do not have the same dimensions.. - - - - - Copies a row into an Vector. - - The row to copy. - A Vector containing the copied elements. - If is negative, - or greater than or equal to the number of rows. - - - - Copies a row into to the given Vector. - - The row to copy. - The Vector to copy the row into. - If the result vector is . - If is negative, - or greater than or equal to the number of rows. - If this.Columns != result.Count. - - - - Copies the requested row elements into a new Vector. - - The row to copy elements from. - The column to start copying from. - The number of elements to copy. - A Vector containing the requested elements. - If: - is negative, - or greater than or equal to the number of rows. - is negative, - or greater than or equal to the number of columns. - (columnIndex + length) >= Columns. - If is not positive. - - - - Copies the requested row elements into a new Vector. - - The row to copy elements from. - The column to start copying from. - The number of elements to copy. - The Vector to copy the column into. - If the result Vector is . - If is negative, - or greater than or equal to the number of columns. - If is negative, - or greater than or equal to the number of rows. - If + - is greater than or equal to the number of rows. - If is not positive. - If result.Count < length. - - - - Copies a column into a new Vector>. - - The column to copy. - A Vector containing the copied elements. - If is negative, - or greater than or equal to the number of columns. - - - - Copies a column into to the given Vector. - - The column to copy. - The Vector to copy the column into. - If the result Vector is . - If is negative, - or greater than or equal to the number of columns. - If this.Rows != result.Count. - - - - Copies the requested column elements into a new Vector. - - The column to copy elements from. - The row to start copying from. - The number of elements to copy. - A Vector containing the requested elements. - If: - is negative, - or greater than or equal to the number of columns. - is negative, - or greater than or equal to the number of rows. - (rowIndex + length) >= Rows. - - If is not positive. - - - - Copies the requested column elements into the given vector. - - The column to copy elements from. - The row to start copying from. - The number of elements to copy. - The Vector to copy the column into. - If the result Vector is . 
- If is negative, - or greater than or equal to the number of columns. - If is negative, - or greater than or equal to the number of rows. - If + - is greater than or equal to the number of rows. - If is not positive. - If result.Count < length. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Returns the elements of the diagonal in a Vector. - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a new matrix and inserts the given column at the given index. - - The index of where to insert the column. - The column to insert. - A new matrix with the inserted column. - If is . - If is < zero or > the number of columns. - If the size of != the number of rows. - - - - Creates a new matrix with the given column removed. - - The index of the column to remove. - A new matrix without the chosen column. - If is < zero or >= the number of columns. - - - - Copies the values of the given Vector to the specified column. - - The column to copy the values to. - The vector to copy the values from. - If is . - If is less than zero, - or greater than or equal to the number of columns. - If the size of does not - equal the number of rows of this Matrix. - - - - Copies the values of the given Vector to the specified sub-column. - - The column to copy the values to. - The row to start copying to. - The number of elements to copy. - The vector to copy the values from. - If is . - If is less than zero, - or greater than or equal to the number of columns. - If the size of does not - equal the number of rows of this Matrix. 
Removed file content (Math.NET Numerics XML API documentation for the Matrix type): copying arrays and vectors into specific columns, rows, sub-rows, sub-matrix regions and the diagonal; creating new matrices with a row inserted or removed; the transpose and conjugate transpose; row and column permutations; concatenating, stacking and diagonally stacking matrices; checks for symmetry and for Hermitian (conjugate) symmetry; and conversion to a multidimensional array, to a column-major array and to a row-major array. The layout example uses the 3x3 matrix with rows 1,2,3 / 4,5,6 / 7,8,9, which is returned column-major as 1, 4, 7, 2, 5, 8, 3, 6, 9 and row-major as 1, 2, 3, 4, 5, 6, 7, 8, 9.
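As a quick illustration of the column-major versus row-major conversions described there, a minimal sketch using the standard Matrix<double> builder (this code and its sample values are not taken from the removed file):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class ArrayLayoutDemo
{
    static void Main()
    {
        // The 3x3 matrix used in the layout example above
        var m = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 1, 2, 3 },
            { 4, 5, 6 },
            { 7, 8, 9 }
        });

        // Column by column: 1, 4, 7, 2, 5, 8, 3, 6, 9
        Console.WriteLine(string.Join(", ", m.ToColumnMajorArray()));

        // Row by row: 1, 2, 3, 4, 5, 6, 7, 8, 9
        Console.WriteLine(string.Join(", ", m.ToRowMajorArray()));
    }
}
```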
The removed documentation continues with conversion to arrays of row arrays and of column arrays (independent copies); accessors that expose the internal backing array, column-major array, row-major array, row arrays or column arrays only when the matrix is actually stored that way and otherwise return null, so that changes to the returned arrays and to the matrix affect each other; and enumerators over all values, over values together with their row and column indices, and over whole columns and rows, including indexed and sub-range variants.
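A short usage sketch of the row and value enumerators; the builder and enumerator calls below are the stock Math.NET Numerics ones, not text recovered from this file:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class EnumerateDemo
{
    static void Main()
    {
        // 2x3 matrix initialised from an index function
        var m = Matrix<double>.Build.Dense(2, 3, (i, j) => 10 * i + j);

        // Iterate whole rows as vectors
        foreach (var row in m.EnumerateRows())
        {
            Console.WriteLine(row);
        }

        // Iterate all values; zeros are included and the order is unspecified
        double sum = 0.0;
        foreach (var value in m.Enumerate())
        {
            sum += value;
        }
        Console.WriteLine(sum);
    }
}
```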
Next it covers the map family that applies a function to every value, either in place, into a result matrix or into a new matrix, optionally forcing zero entries of sparse storage to be visited; fold and reduce operations over rows and columns; combining two matrices element-wise, folding over value pairs, and the find/exists/for-all predicates; the overloaded operators for negation, addition, subtraction, multiplication, division and remainder of matrices, scalars and vectors, each allocating a new result and choosing the denser operand's representation; pointwise square root, exponential, logarithm, log10, trigonometric and hyperbolic functions, absolute value, floor, ceiling and rounding; and the Cholesky and LU decompositions.
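For example, mapping a function over a matrix and solving a square system through the LU factorization could look like this (a sketch with the usual Matrix<double> and Vector<double> builders; the values are made up):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class MapAndLuDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4, 1 },
            { 1, 3 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

        // Element-wise transform into a new matrix
        var squared = a.Map(x => x * x);

        // Solve a*x = b through the LU factorization
        var x = a.LU().Solve(b);

        Console.WriteLine(squared);
        Console.WriteLine(x);
    }
}
```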
It then documents the QR decomposition (including a modified Gram-Schmidt variant), the singular value decomposition and the eigenvalue decomposition; QR-based solve methods for a right-hand-side vector or matrix; iterative solve methods that take a solver, an iterator or a set of stop criteria, and optionally a preconditioner; conversions between single and double precision and to complex matrices; extracting the real and imaginary parts of a complex matrix; hints describing whether existing data must be cleared and whether zero entries may be skipped or must be visited; the symmetry states (unknown, symmetric, Hermitian, not symmetric); and a stop criterion driven by a cancellation token.
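A hedged sketch of the direct factorize-and-solve path described here, using the dense QR and SVD objects (standard library calls, sample data invented):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class DirectSolveDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            {  3.0,  2.0, -1.0 },
            {  2.0, -2.0,  4.0 },
            { -1.0,  0.5, -1.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, -2.0, 0.0 });

        // QR factorization and solve, as in the Solve overloads documented above
        var qr = a.QR();
        var x = qr.Solve(b);

        // Singular value decomposition, e.g. to inspect the condition number
        var svd = a.Svd();

        Console.WriteLine(x);
        Console.WriteLine(svd.ConditionNumber);
    }
}
```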
The following block documents the iterative solver infrastructure: a stop criterion that delegates the decision to a user-supplied callback, one that watches the residual for signs of divergence, one that watches for NaN residuals, one that limits the number of iterations, and one that requires the residual to stay below a maximum for a minimum number of iterations; the IIterationStopCriterion interface and the interfaces for iterative solvers, solver setup objects and preconditioners; the iteration status values; the iterator that combines several stop criteria and reports the overall calculation status; loading solver setup objects from an assembly; and a unit preconditioner that stands in when no real preconditioner is supplied.
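Wiring those pieces together for an iterative solve might look roughly like this; the solver class (BiCgStab) and the stop criterion class names are the stock Math.NET Numerics ones and are an assumption about the package version, not text from this file:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class IterativeSolveDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 5, 2, 0 },
            { 2, 6, 1 },
            { 0, 1, 4 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });

        // BiCgStab is one of the iterative solvers shipped with the library
        var solver = new BiCgStab();

        // Stop after 1000 iterations or once the residual drops below 1e-10;
        // the same criteria could also be bundled into an Iterator<double>.
        var x = a.SolveIterative(b, solver,
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        Console.WriteLine(x);
    }
}
```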
After that come the storage implementations: flags telling whether a storage format is dense and whether all of its fields are mutable (off-diagonal fields of a diagonal matrix, for example, are fixed); unchecked At getters and setters, which are not thread safe, alongside range-checked indexers; equality and hash-code implementations; and the compressed sparse row (CSR) matrix storage, where RowPointers has length RowCount+1, its last entry equals ValueCount, entry i gives the position of the first stored value of row i, and row i therefore holds RowPointers[i+1] - RowPointers[i] stored values, with separate arrays for the column indices and the values themselves; plus the analogous sparse vector storage and the growth strategy used when those arrays must be enlarged.
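The CSR layout it describes can be written out for a small example (illustrative data, not taken from the file):

```csharp
using System;

class CsrLayoutDemo
{
    static void Main()
    {
        // CSR (compressed sparse row) arrays for the 3x3 matrix
        //   [ 1  0  2 ]
        //   [ 0  0  3 ]
        //   [ 4  5  0 ]
        double[] values        = { 1, 2, 3, 4, 5 };  // stored non-zeros, row by row
        int[]    columnIndices = { 0, 2, 2, 0, 1 };  // column of each stored value
        int[]    rowPointers   = { 0, 2, 3, 5 };     // length RowCount + 1, last entry = number of stored values

        // Row i holds rowPointers[i + 1] - rowPointers[i] values,
        // e.g. row 1 holds 3 - 2 = 1 value, the 3 in column 2.
        for (int row = 0; row < rowPointers.Length - 1; row++)
        {
            for (int k = rowPointers[row]; k < rowPointers[row + 1]; k++)
            {
                Console.WriteLine($"({row},{columnIndices[k]}) = {values[k]}");
            }
        }
    }
}
```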
The final block covers equality and hashing for the sparse vector storage, followed by the generic Vector class: the zero and one constants of the element type; addition, subtraction, negation and conjugation; multiplication and division by scalars; dot and conjugate dot products; the outer product M[i,j] = u[i]*v[j]; the canonical modulus (sign of the divisor) versus the remainder (sign of the dividend); pointwise multiply, divide, power, modulus and remainder; helper methods that apply unary and binary functions either in place or into a result vector; and pointwise exponential, logarithm, log10, absolute value, trigonometric, hyperbolic, atan2, ceiling, floor, round, sign, square root, minimum and maximum functions.
- - - - Pointwise applies the maximum with a scalar to each value. - - The scalar value to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the absolute minimum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the absolute minimum with a scalar to each value. - - The scalar value to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the absolute maximum with a scalar to each value. - - The scalar value to compare to. - - - - Pointwise applies the absolute maximum with a scalar to each value. - - The scalar value to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the minimum with the values of another vector to each value. - - The vector with the values to compare to. - - - - Pointwise applies the minimum with the values of another vector to each value. - - The vector with the values to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the maximum with the values of another vector to each value. - - The vector with the values to compare to. - - - - Pointwise applies the maximum with the values of another vector to each value. - - The vector with the values to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the absolute minimum with the values of another vector to each value. - - The vector with the values to compare to. - - - - Pointwise applies the absolute minimum with the values of another vector to each value. - - The vector with the values to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Pointwise applies the absolute maximum with the values of another vector to each value. - - The vector with the values to compare to. - - - - Pointwise applies the absolute maximum with the values of another vector to each value. - - The vector with the values to compare to. - The vector to store the result. - If this vector and are not the same size. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = (sum(abs(this[i])^p))^(1/p) - - - - Normalizes this vector to a unit vector with respect to the p-norm. - - The p value. - This vector normalized to a unit vector with respect to the p-norm. - - - - Returns the value of the absolute minimum element. - - The value of the absolute minimum element. - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the value of the absolute maximum element. - - The value of the absolute maximum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the value of maximum element. - - The value of maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the value of the minimum element. - - The value of the minimum element. - - - - Returns the index of the minimum element. - - The index of minimum element. 
- - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Computes the sum of the absolute value of the vector's elements. - - The sum of the absolute value of the vector's elements. - - - - Indicates whether the current object is equal to another object of the same type. - - An object to compare with this object. - - true if the current object is equal to the parameter; otherwise, false. - - - - - Determines whether the specified is equal to this instance. - - The to compare with this instance. - - true if the specified is equal to this instance; otherwise, false. - - - - - Returns a hash code for this instance. - - - A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. - - - - - Creates a new object that is a copy of the current instance. - - - A new object that is a copy of this instance. - - - - - Returns an enumerator that iterates through the collection. - - - A that can be used to iterate through the collection. - - - - - Returns an enumerator that iterates through a collection. - - - An object that can be used to iterate through the collection. - - - - - Returns a string that describes the type, dimensions and shape of this vector. - - - - - Returns a string that represents the content of this vector, column by column. - - Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. - Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. - Character to use to print if there is not enough space to print all entries. Typical value: "..". - Character to use to separate two columns on a line. Typical value: " " (2 spaces). - Character to use to separate two rows/lines. Typical value: Environment.NewLine. - Function to provide a string for any given entry value. - - - - Returns a string that represents the content of this vector, column by column. - - Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. - Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. - Floating point format string. Can be null. Default value: G6. - Format provider or culture. Can be null. - - - - Returns a string that represents the content of this vector, column by column. - - Floating point format string. Can be null. Default value: G6. - Format provider or culture. Can be null. - - - - Returns a string that summarizes this vector, column by column and with a type header. - - Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. - Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. - Floating point format string. Can be null. Default value: G6. - Format provider or culture. Can be null. - - - - Returns a string that summarizes this vector. - The maximum number of cells can be configured in the class. - - - - - Returns a string that summarizes this vector. - The maximum number of cells can be configured in the class. - The format string is ignored. - - - - - Initializes a new instance of the Vector class. - - - - - Gets the raw vector data storage. - - - - - Gets the length or number of dimensions of this vector. - - - - Gets or sets the value at the given . - The index of the value to get or set. - The value of the vector at the given . - If is negative or - greater than the size of the vector. - - - Gets the value at the given without range checking.. - The index of the value to get or set. - The value of the vector at the given . 
- - - Sets the at the given without range checking.. - The index of the value to get or set. - The value to set. - - - - Resets all values to zero. - - - - - Sets all values of a subvector to zero. - - - - - Set all values whose absolute value is smaller than the threshold to zero, in-place. - - - - - Set all values that meet the predicate to zero, in-place. - - - - - Returns a deep-copy clone of the vector. - - A deep-copy clone of the vector. - - - - Set the values of this vector to the given values. - - The array containing the values to use. - If is . - If is not the same size as this vector. - - - - Copies the values of this vector into the target vector. - - The vector to copy elements into. - If is . - If is not the same size as this vector. - - - - Creates a vector containing specified elements. - - The first element to begin copying from. - The number of elements to copy. - A vector containing a copy of the specified elements. - If is not positive or - greater than or equal to the size of the vector. - If + is greater than or equal to the size of the vector. - - If is not positive. - - - - Copies the values of a given vector into a region in this vector. - - The field to start copying to - The number of fields to copy. Must be positive. - The sub-vector to copy from. - If is - - - - Copies the requested elements from this vector to another. - - The vector to copy the elements to. - The element to start copying from. - The element to start copying to. - The number of elements to copy. - - - - Returns the data contained in the vector as an array. - The returned array will be independent from this vector. - A new memory block will be allocated for the array. - - The vector's data as an array. - - - - Returns the internal array of this vector if, and only if, this vector is stored by such an array internally. - Otherwise returns null. Changes to the returned array and the vector will affect each other. - Use ToArray instead if you always need an independent array. - - - - - Create a matrix based on this vector in column form (one single column). - - - This vector as a column matrix. - - - - - Create a matrix based on this vector in row form (one single row). - - - This vector as a row matrix. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector. - - - The enumerator will include all values, even if they are zero. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector. - - - The enumerator will include all values, even if they are zero. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector and their index. - - - The enumerator returns a Tuple with the first value being the element index - and the second value being the value of the element at that index. - The enumerator will include all values, even if they are zero. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector and their index. - - - The enumerator returns a Tuple with the first value being the element index - and the second value being the value of the element at that index. - The enumerator will include all values, even if they are zero. - - - - - Applies a function to each value of this vector and replaces the value with its result. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). 
- - - - - Applies a function to each value of this vector and replaces the value with its result. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and returns the results as a new vector. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and returns the results as a new vector. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value pair of two vectors and replaces the value in the result vector. - - - - - Applies a function to each value pair of two vectors and returns the results as a new vector. - - - - - Applies a function to update the status with each value pair of two vectors and returns the resulting status. - - - - - Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns a tuple with the index and values of the first element pair of two vectors of the same size satisfying a predicate, or null if none is found. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if at least one element satisfies a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if at least one element pairs of two vectors of the same size satisfies a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if all elements satisfy a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). 
- - - - - Returns true if all element pairs of two vectors of the same size satisfy a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns a Vector containing the same values of . - - This method is included for completeness. - The vector to get the values from. - A vector containing the same values as . - If is . - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Adds a scalar to each element of a vector. - - The vector to add to. - The scalar value to add. - The result of the addition. - If is . - - - - Adds a scalar to each element of a vector. - - The scalar value to add. - The vector to add to. - The result of the addition. - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Subtracts a scalar from each element of a vector. - - The vector to subtract from. - The scalar value to subtract. - The result of the subtraction. - If is . - - - - Subtracts each element of a vector from a scalar. - - The scalar value to subtract from. - The vector to subtract. - The result of the subtraction. - If is . - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . - - - - Divides a scalar with a vector. - - The scalar to divide. - The vector. - The result of the division. - If is . - - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Pointwise divides two Vectors. - - The vector to divide. - The other vector. - The result of the division. - If and are not the same size. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the remainder of. - The divisor to use. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of the given dividend of each element of the vector. - - The dividend we want to compute the remainder of. - The vector whose elements we want to use as divisor. - If is . - - - - Computes the pointwise remainder (% operator), where the result has the sign of the dividend, - of each element of two vectors. - - The vector whose elements we want to compute the remainder of. - The divisor to use. - If and are not the same size. - If is . 
- - - - Computes the sqrt of a vector pointwise - - The input vector - - - - - Computes the exponential of a vector pointwise - - The input vector - - - - - Computes the log of a vector pointwise - - The input vector - - - - - Computes the log10 of a vector pointwise - - The input vector - - - - - Computes the sin of a vector pointwise - - The input vector - - - - - Computes the cos of a vector pointwise - - The input vector - - - - - Computes the tan of a vector pointwise - - The input vector - - - - - Computes the asin of a vector pointwise - - The input vector - - - - - Computes the acos of a vector pointwise - - The input vector - - - - - Computes the atan of a vector pointwise - - The input vector - - - - - Computes the sinh of a vector pointwise - - The input vector - - - - - Computes the cosh of a vector pointwise - - The input vector - - - - - Computes the tanh of a vector pointwise - - The input vector - - - - - Computes the absolute value of a vector pointwise - - The input vector - - - - - Computes the floor of a vector pointwise - - The input vector - - - - - Computes the ceiling of a vector pointwise - - The input vector - - - - - Computes the rounded value of a vector pointwise - - The input vector - - - - - Converts a vector to single precision. - - - - - Converts a vector to double precision. - - - - - Converts a vector to single precision complex numbers. - - - - - Converts a vector to double precision complex numbers. - - - - - Gets a single precision complex vector with the real parts from the given vector. - - - - - Gets a double precision complex vector with the real parts from the given vector. - - - - - Gets a real vector representing the real parts of a complex vector. - - - - - Gets a real vector representing the real parts of a complex vector. - - - - - Gets a real vector representing the imaginary parts of a complex vector. - - - - - Gets a real vector representing the imaginary parts of a complex vector. - - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - - Predictor matrix X - Response vector Y - The direct method to be used to compute the regression. - Best fitting vector for model parameters β - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - - Predictor matrix X - Response matrix Y - The direct method to be used to compute the regression. - Best fitting vector for model parameters β - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - - List of predictor-arrays. - List of responses - True if an intercept should be added as first artificial predictor value. Default = false. - The direct method to be used to compute the regression. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses the cholesky decomposition of the normal equations. - - Sequence of predictor-arrays and their response. - True if an intercept should be added as first artificial predictor value. Default = false. - The direct method to be used to compute the regression. - Best fitting list of model parameters β for each element in the predictor-arrays. 
- - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - Uses the cholesky decomposition of the normal equations. - - Predictor matrix X - Response vector Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - Uses the cholesky decomposition of the normal equations. - - Predictor matrix X - Response matrix Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses the cholesky decomposition of the normal equations. - - List of predictor-arrays. - List of responses - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses the cholesky decomposition of the normal equations. - - Sequence of predictor-arrays and their response. - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. - - Predictor matrix X - Response vector Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. - - Predictor matrix X - Response matrix Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. - - List of predictor-arrays. - List of responses - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. - - Sequence of predictor-arrays and their response. - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. 
- Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. - - Predictor matrix X - Response vector Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. - Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. - - Predictor matrix X - Response matrix Y - Best fitting vector for model parameters β - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. - - List of predictor-arrays. - List of responses - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. - Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. - - Sequence of predictor-arrays and their response. - True if an intercept should be added as first artificial predictor value. Default = false. - Best fitting list of model parameters β for each element in the predictor-arrays. - - - - Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, - returning its best fitting parameters as (a, b) tuple, - where a is the intercept and b the slope. - - Predictor (independent) - Response (dependent) - - - - Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, - returning its best fitting parameters as (a, b) tuple, - where a is the intercept and b the slope. - - Predictor-Response samples as tuples - - - - Least-Squares fitting the points (x,y) to a line y : x -> b*x, - returning its best fitting parameter b, - where the intercept is zero and b the slope. - - Predictor (independent) - Response (dependent) - - - - Least-Squares fitting the points (x,y) to a line y : x -> b*x, - returning its best fitting parameter b, - where the intercept is zero and b the slope. - - Predictor-Response samples as tuples - - - - Weighted Linear Regression using normal equations. - - Predictor matrix X - Response vector Y - Weight matrix W, usually diagonal with an entry for each predictor (row). - - - - Weighted Linear Regression using normal equations. - - Predictor matrix X - Response matrix Y - Weight matrix W, usually diagonal with an entry for each predictor (row). - - - - Weighted Linear Regression using normal equations. - - Predictor matrix X - Response vector Y - Weight matrix W, usually diagonal with an entry for each predictor (row). - True if an intercept should be added as first artificial predictor value. Default = false. - - - - Weighted Linear Regression using normal equations. - - List of sample vectors (predictor) together with their response. - List of weights, one for each sample. - True if an intercept should be added as first artificial predictor value. Default = false. 
- - - - Locally-Weighted Linear Regression using normal equations. - - - - - Locally-Weighted Linear Regression using normal equations. - - - - - First Order AB method(same as Forward Euler) - - Initial value - Start Time - End Time - Size of output array(the larger, the finer) - ode model - approximation with size N - - - - Second Order AB Method - - Initial value 1 - Start Time - End Time - Size of output array(the larger, the finer) - ode model - approximation with size N - - - - Third Order AB Method - - Initial value 1 - Start Time - End Time - Size of output array(the larger, the finer) - ode model - approximation with size N - - - - Fourth Order AB Method - - Initial value 1 - Start Time - End Time - Size of output array(the larger, the finer) - ode model - approximation with size N - - - - ODE Solver Algorithms - - - - - Second Order Runge-Kutta method - - initial value - start time - end time - Size of output array(the larger, the finer) - ode function - approximations - - - - Fourth Order Runge-Kutta method - - initial value - start time - end time - Size of output array(the larger, the finer) - ode function - approximations - - - - Second Order Runge-Kutta to solve ODE SYSTEM - - initial vector - start time - end time - Size of output array(the larger, the finer) - ode function - approximations - - - - Fourth Order Runge-Kutta to solve ODE SYSTEM - - initial vector - start time - end time - Size of output array(the larger, the finer) - ode function - approximations - - - - Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm is an iterative method for solving box-constrained nonlinear optimization problems - http://www.ece.northwestern.edu/~nocedal/PSfiles/limited.ps.gz - - - - - Find the minimum of the objective function given lower and upper bounds - - The objective function, must support a gradient - The lower bound - The upper bound - The initial guess - The MinimizationResult which contains the minimum and the ExitCondition - - - - Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems - - - - - Creates BFGS minimizer - - The gradient tolerance - The parameter tolerance - The function progress tolerance - The maximum number of iterations - - - - Find the minimum of the objective function given lower and upper bounds - - The objective function, must support a gradient - The initial guess - The MinimizationResult which contains the minimum and the ExitCondition - - - - - Creates a base class for BFGS minimization - - - - - Broyden-Fletcher-Goldfarb-Shanno solver for finding function minima - See http://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm - Inspired by implementation: https://github.com/PatWie/CppNumericalSolvers/blob/master/src/BfgsSolver.cpp - - - - - Finds a minimum of a function by the BFGS quasi-Newton method - This uses the function and it's gradient (partial derivatives in each direction) and approximates the Hessian - - An initial guess - Evaluates the function at a point - Evaluates the gradient of the function at a point - The minimum found - - - - Objective function with a frozen evaluation that must not be changed from the outside. - - - - Create a new unevaluated and independent copy of this objective function - - - - Objective function with a mutable evaluation. - - - - Create a new independent copy of this objective function, evaluated at the same point. - - - - Get the y-values of the observations. 
- - - - - Get the values of the weights for the observations. - - - - - Get the y-values of the fitted model that correspond to the independent values. - - - - - Get the values of the parameters. - - - - - Get the residual sum of squares. - - - - - Get the Gradient vector. G = J'(y - f(x; p)) - - - - - Get the approximated Hessian matrix. H = J'J - - - - - Get the number of calls to function. - - - - - Get the number of calls to jacobian. - - - - - Get the degree of freedom. - - - - - The scale factor for initial mu - - - - - Non-linear least square fitting by the Levenberg-Marduardt algorithm. - - The objective function, including model, observations, and parameter bounds. - The initial guess values. - The initial damping parameter of mu. - The stopping threshold for infinity norm of the gradient vector. - The stopping threshold for L2 norm of the change of parameters. - The stopping threshold for L2 norm of the residuals. - The max iterations. - The result of the Levenberg-Marquardt minimization - - - - Limited Memory version of Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm - - - - - - Creates L-BFGS minimizer - - Numbers of gradients and steps to store. - - - - Find the minimum of the objective function given lower and upper bounds - - The objective function, must support a gradient - The initial guess - The MinimizationResult which contains the minimum and the ExitCondition - - - - Search for a step size alpha that satisfies the weak Wolfe conditions. The weak Wolfe - Conditions are - i) Armijo Rule: f(x_k + alpha_k p_k) <= f(x_k) + c1 alpha_k p_k^T g(x_k) - ii) Curvature Condition: p_k^T g(x_k + alpha_k p_k) >= c2 p_k^T g(x_k) - where g(x) is the gradient of f(x), 0 < c1 < c2 < 1. - - Implementation is based on http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf - - references: - http://en.wikipedia.org/wiki/Wolfe_conditions - http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf - - - - Implemented following http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf - The objective function being optimized, evaluated at the starting point of the search - Search direction - Initial size of the step in the search direction - - - - The objective function being optimized, evaluated at the starting point of the search - Search direction - Initial size of the step in the search direction - The upper bound - - - - Creates a base class for minimization - - The gradient tolerance - The parameter tolerance - The function progress tolerance - The maximum number of iterations - - - - Class implementing the Nelder-Mead simplex algorithm, used to find a minima when no gradient is available. - Called fminsearch() in Matlab. 
A description of the algorithm can be found at - http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 - or - https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method - - - - - Finds the minimum of the objective function without an initial perturbation, the default values used - by fminsearch() in Matlab are used instead - http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 - - The objective function, no gradient or hessian needed - The initial guess - The minimum point - - - - Finds the minimum of the objective function with an initial perturbation - - The objective function, no gradient or hessian needed - The initial guess - The initial perturbation - The minimum point - - - - Finds the minimum of the objective function without an initial perturbation, the default values used - by fminsearch() in Matlab are used instead - http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 - - The objective function, no gradient or hessian needed - The initial guess - The minimum point - - - - Finds the minimum of the objective function with an initial perturbation - - The objective function, no gradient or hessian needed - The initial guess - The initial perturbation - The minimum point - - - - Evaluate the objective function at each vertex to create a corresponding - list of error values for each vertex - - - - - - - - Check whether the points in the error profile have so little range that we - consider ourselves to have converged - - - - - - - - - Examine all error values to determine the ErrorProfile - - - - - - - Construct an initial simplex, given starting guesses for the constants, and - initial step sizes for each dimension - - - - - - - Test a scaling operation of the high point, and replace it if it is an improvement - - - - - - - - - - - Contract the simplex uniformly around the lowest point - - - - - - - - - Compute the centroid of all points except the worst - - - - - - - - The value of the constant - - - - - Returns the best fit parameters. - - - - - Returns the standard errors of the corresponding parameters - - - - - Returns the y-values of the fitted model that correspond to the independent values. - - - - - Returns the covariance matrix at minimizing point. - - - - - Returns the correlation matrix at minimizing point. - - - - - The stopping threshold for the function value or L2 norm of the residuals. - - - - - The stopping threshold for L2 norm of the change of the parameters. - - - - - The stopping threshold for infinity norm of the gradient. - - - - - The maximum number of iterations. - - - - - The lower bound of the parameters. - - - - - The upper bound of the parameters. - - - - - The scale factors for the parameters. - - - - - Objective function where neither Gradient nor Hessian is available. - - - - - Objective function where the Gradient is available. Greedy evaluation. - - - - - Objective function where the Gradient is available. Lazy evaluation. - - - - - Objective function where the Hessian is available. Greedy evaluation. - - - - - Objective function where the Hessian is available. Lazy evaluation. - - - - - Objective function where both Gradient and Hessian are available. Greedy evaluation. - - - - - Objective function where both Gradient and Hessian are available. Lazy evaluation. - - - - - Objective function where neither first nor second derivative is available. - - - - - Objective function where the first derivative is available. 
- - - - - Objective function where the first and second derivatives are available. - - - - - objective model with a user supplied jacobian for non-linear least squares regression. - - - - - Objective model for non-linear least squares regression. - - - - - Objective model with a user supplied jacobian for non-linear least squares regression. - - - - - Objective model for non-linear least squares regression. - - - - - Objective function with a user supplied jacobian for nonlinear least squares regression. - - - - - Objective function for nonlinear least squares regression. - The numerical jacobian with accuracy order is used. - - - - - Adapts an objective function with only value implemented - to provide a gradient as well. Gradient calculation is - done using the finite difference method, specifically - forward differences. - - For each gradient computed, the algorithm requires an - additional number of function evaluations equal to the - functions's number of input parameters. - - - - - Set or get the values of the independent variable. - - - - - Set or get the values of the observations. - - - - - Set or get the values of the weights for the observations. - - - - - Get whether parameters are fixed or free. - - - - - Get the number of observations. - - - - - Get the number of unknown parameters. - - - - - Get the degree of freedom - - - - - Get the number of calls to function. - - - - - Get the number of calls to jacobian. - - - - - Set or get the values of the parameters. - - - - - Get the y-values of the fitted model that correspond to the independent values. - - - - - Get the residual sum of squares. - - - - - Get the Gradient vector of x and p. - - - - - Get the Hessian matrix of x and p, J'WJ - - - - - Set observed data to fit. - - - - - Set parameters and bounds. - - The initial values of parameters. - The list to the parameters fix or free. - - - - Non-linear least square fitting by the trust region dogleg algorithm. - - - - - The trust region subproblem. - - - - - The stopping threshold for the trust region radius. - - - - - Non-linear least square fitting by the trust-region algorithm. - - The objective model, including function, jacobian, observations, and parameter bounds. - The subproblem - The initial guess values. - The stopping threshold for L2 norm of the residuals. - The stopping threshold for infinity norm of the gradient vector. - The stopping threshold for L2 norm of the change of parameters. - The stopping threshold for trust region radius - The max iterations. - - - - - Non-linear least square fitting by the trust region Newton-Conjugate-Gradient algorithm. - - - - - Class to represent a permutation for a subset of the natural numbers. - - - - - Entry _indices[i] represents the location to which i is permuted to. - - - - - Initializes a new instance of the Permutation class. - - An array which represents where each integer is permuted too: indices[i] represents that integer i - is permuted to location indices[i]. - - - - Gets the number of elements this permutation is over. - - - - - Computes where permutes too. - - The index to permute from. - The index which is permuted to. - - - - Computes the inverse of the permutation. - - The inverse of the permutation. - - - - Construct an array from a sequence of inversions. - - - From wikipedia: the permutation 12043 has the inversions (0,2), (1,2) and (3,4). This would be - encoded using the array [22244]. - - The set of inversions to construct the permutation from. - A permutation generated from a sequence of inversions. 
- - - - Construct a sequence of inversions from the permutation. - - - From wikipedia: the permutation 12043 has the inversions (0,2), (1,2) and (3,4). This would be - encoded using the array [22244]. - - A sequence of inversions. - - - - Checks whether the array represents a proper permutation. - - An array which represents where each integer is permuted too: indices[i] represents that integer i - is permuted to location indices[i]. - True if represents a proper permutation, false otherwise. - - - - A single-variable polynomial with real-valued coefficients and non-negative exponents. - - - - - The coefficients of the polynomial in a - - - - - Only needed for the ToString method - - - - - Degree of the polynomial, i.e. the largest monomial exponent. For example, the degree of y=x^2+x^5 is 5, for y=3 it is 0. - The null-polynomial returns degree -1 because the correct degree, negative infinity, cannot be represented by integers. - - - - - Create a zero-polynomial with a coefficient array of the given length. - An array of length N can support polynomials of a degree of at most N-1. - - Length of the coefficient array - - - - Create a zero-polynomial - - - - - Create a constant polynomial. - Example: 3.0 -> "p : x -> 3.0" - - The coefficient of the "x^0" monomial. - - - - Create a polynomial with the provided coefficients (in ascending order, where the index matches the exponent). - Example: {5, 0, 2} -> "p : x -> 5 + 0 x^1 + 2 x^2". - - Polynomial coefficients as array - - - - Create a polynomial with the provided coefficients (in ascending order, where the index matches the exponent). - Example: {5, 0, 2} -> "p : x -> 5 + 0 x^1 + 2 x^2". - - Polynomial coefficients as enumerable - - - - Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k - - - - - Evaluate a polynomial at point x. - Coefficients are ordered ascending by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - Coefficients are ordered ascending by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - Coefficients are ordered ascending by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - - The location where to evaluate the polynomial at. - - - - Evaluate a polynomial at point x. - - The location where to evaluate the polynomial at. - - - - Evaluate a polynomial at points z. - - The locations where to evaluate the polynomial at. - - - - Evaluate a polynomial at points z. - - The locations where to evaluate the polynomial at. - - - - Calculates the complex roots of the Polynomial by eigenvalue decomposition - - a vector of complex numbers with the roots - - - - Get the eigenvalue matrix A of this polynomial such that eig(A) = roots of this polynomial. - - Eigenvalue matrix A - This matrix is similar to the companion matrix of this polynomial, in such a way, that it's transpose is the columnflip of the companion matrix - - - - Addition of two Polynomials (point-wise). 
- - Left Polynomial - Right Polynomial - Resulting Polynomial - - - - Addition of a polynomial and a scalar. - - - - - Subtraction of two Polynomials (point-wise). - - Left Polynomial - Right Polynomial - Resulting Polynomial - - - - Addition of a scalar from a polynomial. - - - - - Addition of a polynomial from a scalar. - - - - - Negation of a polynomial. - - - - - Multiplies a polynomial by a polynomial (convolution) - - Left polynomial - Right polynomial - Resulting Polynomial - - - - Scales a polynomial by a scalar - - Polynomial - Scalar value - Resulting Polynomial - - - - Scales a polynomial by division by a scalar - - Polynomial - Scalar value - Resulting Polynomial - - - - Euclidean long division of two polynomials, returning the quotient q and remainder r of the two polynomials a and b such that a = q*b + r - - Left polynomial - Right polynomial - A tuple holding quotient in first and remainder in second - - - - Point-wise division of two Polynomials - - Left Polynomial - Right Polynomial - Resulting Polynomial - - - - Point-wise multiplication of two Polynomials - - Left Polynomial - Right Polynomial - Resulting Polynomial - - - - Division of two polynomials returning the quotient-with-remainder of the two polynomials given - - Right polynomial - A tuple holding quotient in first and remainder in second - - - - Addition of two Polynomials (piecewise) - - Left polynomial - Right polynomial - Resulting Polynomial - - - - adds a scalar to a polynomial. - - Polynomial - Scalar value - Resulting Polynomial - - - - adds a scalar to a polynomial. - - Scalar value - Polynomial - Resulting Polynomial - - - - Subtraction of two polynomial. - - Left polynomial - Right polynomial - Resulting Polynomial - - - - Subtracts a scalar from a polynomial. - - Polynomial - Scalar value - Resulting Polynomial - - - - Subtracts a polynomial from a scalar. - - Scalar value - Polynomial - Resulting Polynomial - - - - Negates a polynomial. - - Polynomial - Resulting Polynomial - - - - Multiplies a polynomial by a polynomial (convolution). - - Left polynomial - Right polynomial - resulting Polynomial - - - - Multiplies a polynomial by a scalar. - - Polynomial - Scalar value - Resulting Polynomial - - - - Multiplies a polynomial by a scalar. - - Scalar value - Polynomial - Resulting Polynomial - - - - Divides a polynomial by scalar value. - - Polynomial - Scalar value - Resulting Polynomial - - - - Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". - - - - - Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". - - - - - Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". - - - - - Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". - - - - - Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". - - - - - Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". - - - - - Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". - - - - - Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". - - - - - Creates a new object that is a copy of the current instance. - - - A new object that is a copy of this instance. - - - - - Utilities for working with floating point numbers. 
- Utilities for working with floating point numbers. Useful links: http://docs.sun.com/source/806-3568/ncg_goldberg.html#689 ("What every computer scientist should know about floating-point arithmetic") and http://en.wikipedia.org/wiki/Machine_epsilon (definition of machine epsilon).
- Three-way comparison helpers determine which of two doubles is bigger: a < b -> -1; a almost equal to b (according to the tolerance parameter) -> 0; a > b -> +1. Overloads accept an absolute accuracy, a relative accuracy, a number of decimal places (must be 1 or larger), or a maximum error in Units in the Last Place (ulps, must be 1 or larger).
- "Is larger" and "is smaller" helpers determine whether the first value is larger (respectively smaller) than the second to within a tolerance, returning true or false. The same tolerance flavours are offered: a number of decimal places, an absolute accuracy, a relative accuracy, and a maximum number of floating point values between the two (an equality comparison based on the binary representation, tolerance 1 or larger). For the decimal-places overloads the values are considered equal if their difference is smaller than 10^(-numberOfDecimalPlaces), halved so that there is half the range on each side of the numbers: with 2 decimal places, 0.01 equals anything between 0.005 and 0.015, but not 0.02 and not 0.00.
- A finiteness check tests whether a given double is finite, i.e. neither NaN nor infinity.
- Constants follow, starting with the number of binary digits used to represent the mantissa of a double precision floating point value (in a number such as 0.134556 * 10^5 the digits are 0.134556 and the exponent is 5).
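A stand-alone sketch of the halved decimal-places tolerance and the three-way comparison described above (the method names are my own, not the library's API):

```csharp
using System;

static class CompareSketch
{
    // Equal when the difference is within half of 10^(-decimalPlaces),
    // so the tolerance band is centred on each value.
    public static bool AlmostEqualToDecimalPlaces(double a, double b, int decimalPlaces)
    {
        if (decimalPlaces < 1)
            throw new ArgumentOutOfRangeException(nameof(decimalPlaces), "Must be 1 or larger.");
        double tolerance = 0.5 * Math.Pow(10, -decimalPlaces);
        return Math.Abs(a - b) < tolerance;
    }

    // a < b -> -1; almost equal -> 0; a > b -> +1, as in the comparers summarised above.
    public static int CompareWithAbsoluteTolerance(double a, double b, double absoluteAccuracy)
    {
        if (Math.Abs(a - b) < absoluteAccuracy) return 0;
        return a < b ? -1 : +1;
    }
}

// AlmostEqualToDecimalPlaces(0.01, 0.011, 2) -> true, AlmostEqualToDecimalPlaces(0.01, 0.02, 2) -> false
```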
- A matching constant gives the number of binary digits in the mantissa of a single precision (32 bit) floating point value.
- Standard epsilon constants give the maximum relative precision of IEEE 754 double-precision (64 bit) and single-precision (32 bit) numbers, once according to the definition of Prof. Demmel (as used in LAPACK and Scilab) and once according to the definition of Prof. Higham (as used in the ISO C standard and MATLAB). The actual machine epsilons are also documented: the smallest number that can be subtracted from 1 (Demmel) or added to 1 (Higham) yielding a result different from 1, also known as the unit roundoff error; on a standard machine these are equivalent to `DoublePrecision` and `PositiveDoublePrecision`. Further constants give the number of significant decimal places of double and single precision numbers, and the values 10 * 2^(-53) = 1.11022302462516E-15 and 10 * 2^(-24) = 5.96046447753906E-07.
- Helper functions return the magnitude of a number (double and float overloads) and the number divided by its magnitude, effectively a value between -10 and 10.
- A "directional" 64 bit (respectively 32 bit) integer representation is documented: a long (int) value that acts the same as a double (float), e.g. a negative double maps to a negative long starting at 0 and becoming more negative as the double becomes more negative.
- Increment and Decrement move a floating point number to the next bigger or next smaller value representable by the data type, optionally by a given number of steps; the step length depends on the value, Increment(double.MaxValue) returns positive infinity and Decrement(double.MinValue) returns negative infinity.
- Coercion helpers force small numbers near zero to zero, either within a maximum count of representable numbers from zero, within an absolute threshold (an exception is thrown if the threshold is negative), or with the default threshold 2^(-53) = 1.11e-16.
- Range helpers determine the range of floating point numbers that match a value within a given ulps tolerance (returning the bottom and top of the range, or only the maximum or only the minimum matching number), and the range of ulps that match a value within a given relative difference (throwing for negative tolerances, infinities and NaN).
- A counting helper evaluates the number of representable doubles between two values: equal values evaluate to zero, neighbouring values to one; infinities and NaN are rejected.
- An epsilon helper evaluates the minimum distance from a double to the next distinguishable number (the negative epsilon; the more common positive epsilon is twice this value).
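The directional integer mapping and the Increment/Decrement behaviour described above can be sketched with BitConverter. This is a stand-alone illustration (names, default values and the lack of NaN handling are my own simplifications, not the library's API):

```csharp
using System;

static class UlpSketch
{
    // Map a double onto an ordered 64-bit integer so that ULP arithmetic works
    // across the sign boundary (-0.0 maps to 0; more negative doubles map to
    // more negative integers). NaN is not handled in this sketch.
    public static long AsDirectionalInt64(double value)
    {
        long bits = BitConverter.DoubleToInt64Bits(value);
        return bits >= 0 ? bits : long.MinValue - bits;
    }

    public static double FromDirectionalInt64(long directional)
    {
        long bits = directional >= 0 ? directional : long.MinValue - directional;
        return BitConverter.Int64BitsToDouble(bits);
    }

    // Next bigger / next smaller representable double, optionally several ULP steps away.
    // Increment(double.MaxValue) yields positive infinity, as documented above.
    public static double Increment(double value, long count = 1)
        => FromDirectionalInt64(AsDirectionalInt64(value) + count);

    public static double Decrement(double value, long count = 1)
        => FromDirectionalInt64(AsDirectionalInt64(value) - count);

    // Coerce values near zero to zero using an absolute threshold (default 2^-53).
    public static double CoerceZero(double value, double threshold = 1.1102230246251565e-16)
        => Math.Abs(value) < threshold ? 0.0 : value;
}
```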
- The minimum-distance (epsilon) evaluation is also documented for float, together with positive-epsilon variants for double and float, and routines that measure the actual negative and positive double precision machine epsilon at run time (the smallest number that can be subtracted from, or added to, 1 yielding a result different from 1).
- A family of "almost equal" comparisons follows, each for double, float, Complex and Complex32 values as well as for pre-computed norms (the norm of each value and the norm of their difference, all of which may be negative): equality within a specified maximum absolute error, equality within a specified maximum relative error, and a parameterless variant that returns true if the two values differ by no more than 10 * 2^(-52).
- Decimal-places variants are documented twice: once using the number of decimal places as an absolute measure (the values are equal if their difference is smaller than 0.5e-decimalPlaces, half the range on each side of the numbers, so with 2 decimal places 0.01 equals anything between 0.005 and 0.015 but not 0.02 and not 0.00), and once where an absolute difference is compared only if the numbers are very close to zero and the relative difference is compared otherwise (an exception is thrown if the number of decimal places is negative).
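The absolute versus relative distinction documented above can be sketched as follows (stand-alone, with my own method names and an assumed cut-off for "very close to zero"):

```csharp
using System;

static class AlmostEqualSketch
{
    // Absolute: the difference itself must be within the maximum error.
    public static bool AlmostEqual(double a, double b, double maximumAbsoluteError)
    {
        if (double.IsNaN(a) || double.IsNaN(b)) return false;
        if (double.IsInfinity(a) || double.IsInfinity(b)) return a == b;
        return Math.Abs(a - b) < maximumAbsoluteError;
    }

    // Relative: the difference is measured against the larger magnitude, except very
    // close to zero where a relative measure breaks down and absolute is used instead.
    public static bool AlmostEqualRelative(double a, double b, double maximumError)
    {
        if (double.IsNaN(a) || double.IsNaN(b)) return false;
        if (double.IsInfinity(a) || double.IsInfinity(b)) return a == b;

        double diff = Math.Abs(a - b);
        double scale = Math.Max(Math.Abs(a), Math.Abs(b));
        if (scale < 1e-10) return diff < maximumError;   // assumed "near zero" cut-off
        return diff / scale < maximumError;
    }
}

// 1.0e6 and 1.0e6 + 0.5 differ by 0.5 absolutely but only by 5e-7 relatively, so
// AlmostEqualRelative(1.0e6, 1.0e6 + 0.5, 1e-6) is true while AlmostEqual(..., 1e-6) is false.
```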
- A comparison based on the binary representation determines the "number" of floating point values between the two operands (the number of discrete steps) and checks whether it is within the specified tolerance; with a tolerance of 1 the result is true only if the two numbers share the same binary representation or are adjacent representable values. The technique is explained at http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm, and http://www.extremeoptimization.com/resources/Articles/FPDotNetConceptsAndFormats.aspx explains how to port the C code to .NET without pointers or unsafe code. The tolerance must be 1 or larger; a float overload is provided as well.
- List comparisons determine whether two lists of doubles are equal within a specified maximum error or within a number of decimal places, with the same absolute, relative and norm-based overload set as the scalar versions.
- Vector and matrix comparisons are documented with the same four flavours: equality within a maximum error, within a relative maximum error, within a number of decimal places used as an absolute measure, and within a number of decimal places compared absolutely near zero and relatively otherwise.
- A support interface for precision operations (like AlmostEquals), generic in the implementing type, underpins these comparisons: it returns a norm of a value, appropriate for measuring how close the value is to zero, and a norm of the difference of two values, appropriate for measuring how close together they are.
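A sketch of the ULP-distance comparison described above, written without unsafe code in the spirit of the linked articles (names are mine; overflow for huge values of opposite sign is ignored):

```csharp
using System;

static class UlpDistanceSketch
{
    // Reinterpret a double as an ordered 64-bit integer (monotonic across the sign boundary).
    static long AsOrderedInt64(double value)
    {
        long bits = BitConverter.DoubleToInt64Bits(value);
        return bits >= 0 ? bits : long.MinValue - bits;
    }

    // Number of representable doubles between a and b: 0 for equal values, 1 for neighbours.
    public static ulong NumbersBetween(double a, double b)
    {
        if (double.IsNaN(a) || double.IsInfinity(a)) throw new ArgumentException("a must be finite", nameof(a));
        if (double.IsNaN(b) || double.IsInfinity(b)) throw new ArgumentException("b must be finite", nameof(b));
        long ia = AsOrderedInt64(a), ib = AsOrderedInt64(b);
        return (ulong)Math.Abs(ia - ib);   // sketch only: may overflow for opposite extreme values
    }

    // Equal if at most maxNumbersBetween discrete steps apart.
    public static bool AlmostEqualNumbersBetween(double a, double b, long maxNumbersBetween)
    {
        if (maxNumbersBetween < 1) throw new ArgumentOutOfRangeException(nameof(maxNumbersBetween));
        return NumbersBetween(a, b) <= (ulong)maxNumbersBetween;
    }
}
```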
- The native provider sections share a set of housekeeping members: a Revision number, the name of the native DLL, the P/Invoke methods into the native math libraries, and a call that frees memory buffers, caches and handles allocated in or to the provider without unloading the provider itself (safe to call even if the provider is not loaded).
- The MKL provider adds memory-pool management: freeing the memory allocated to the MKL memory pool (globally or for the current thread), disabling the pool (which may impact performance), retrieving the number of allocated buffers and the bytes allocated to them, enabling and disabling peak-memory statistics, and measuring peak memory usage with an optional counter reset.
- A consistency versus performance trade-off enumeration selects reproducibility across machines: consistent on the same CPU only (maximum performance), consistent on Intel and compatible CPUs with SSE2 support (maximum compatibility), or consistent on Intel CPUs supporting SSE2, SSE4.2, AVX or AVX2 and later.
- A helper class loads native libraries depending on the architecture of the OS and process. It keeps a dictionary of handles to previously loaded libraries, exposes a string describing the architecture and bitness of the current process and the exception of the last failed load (null if the load succeeded), and offers three entry points: load by file name with an optional hint path, try-load by name and directory (preferring an implementation suitable for the current CPU architecture and process mode if a matching subfolder exists), and try-load by the full path of the library. Each returns true if the library was successfully loaded or had already been loaded.
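The architecture-aware probing idea can be illustrated roughly as below. This is not the loader described above: it uses System.Runtime.InteropServices.NativeLibrary, which only exists on .NET Core 3.0+ / .NET 5+, and the folder layout ("x64", "x86", ...) is an assumption for the sake of the example:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

static class NativeProbeSketch
{
    // Try an architecture-specific subfolder of the hint path, then the hint path,
    // then the default OS probing, returning the first handle that loads.
    public static bool TryLoad(string fileName, string hintPath, out IntPtr handle)
    {
        string arch = RuntimeInformation.ProcessArchitecture.ToString().ToLowerInvariant();

        foreach (string candidate in new[]
        {
            Path.Combine(hintPath ?? "", arch, fileName),
            Path.Combine(hintPath ?? "", fileName),
            fileName
        })
        {
            if (NativeLibrary.TryLoad(candidate, out handle))
                return true;
        }

        handle = IntPtr.Zero;
        return false;
    }
}
```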
- The Fourier transform provider can be read or set directly, although UseNativeMKL or UseManaged should be preferred. An optional hint path tells Numerics where to look for native provider binaries; if unset, it falls back to the `MathNetNumericsFFTProviderPath` environment variable or the default probing paths. Further helpers try to use a native provider if available, use the best provider available, or use a specific provider configured e.g. via the "MathNetNumericsFFTProvider" environment variable with a fall-back to the best provider. Each provider exposes an availability check (verification may still fail if available, but it will certainly fail if unavailable), an initialise-and-verify call that falls back to alternatives such as the managed provider, and the resource-freeing call.
- The managed implementation documents the Bluestein algorithm for arbitrary transform lengths: generation of the Bluestein sequence exp(I*Pi*k^2/N) for a given problem size (sequences with length greater than Math.Sqrt(Int32.MaxValue) + 1 would make k*k overflow, see GH-286), parallel convolution with the Bluestein sequence, swapping the real and imaginary parts of each sample, and the generic Bluestein FFT for arbitrarily sized sample vectors.
- FFT results can be fully rescaled or half rescaled (e.g. for symmetric transforms). The radix-2 path documents a reorder helper, a radix-2 step helper (parameterised by the sample vector, the Fourier series exponent sign, the level group size and the index inside the level), and generic radix-2 FFTs for power-of-two sized sample vectors, including parallel versions.
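The Bluestein chirp sequence b_k = exp(I*Pi*k^2/N) mentioned above can be generated as follows. This is a stand-alone sketch: k*k is reduced modulo 2N in 64-bit arithmetic so that large problem sizes do not overflow Int32, which is the failure mode referenced as GH-286 in the documentation:

```csharp
using System;
using System.Numerics;

static class BluesteinSketch
{
    public static Complex[] BluesteinSequence(int n)
    {
        double s = Math.PI / n;
        var sequence = new Complex[n];
        for (int k = 0; k < n; k++)
        {
            // exp(i*pi*k^2/n) is periodic in k^2 with period 2n, so reducing (k*k) mod 2n
            // keeps the argument exact while avoiding Int32 overflow for large n.
            long t = ((long)k * k) % (2L * n);
            sequence[k] = new Complex(Math.Cos(s * t), Math.Sin(s * t));
        }
        return sequence;
    }
}
```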
- A native provider section then repeats the hint-path, availability, initialise-and-verify and resource-freeing documentation described above.
- NVidia's CUDA Toolkit linear algebra provider is documented next, with one copy of the interface per scalar type. For Complex data it covers: the dot product of two vectors (equivalent to the DOT BLAS routine); adding a scaled vector to another, result = y + alpha*x (similar to AXPY); scaling an array, usable for both vectors and matrices (similar to SCAL); multiplying two matrices, result = x * y, a simplified GEMM with alpha set to Complex.One, beta set to Complex.Zero and no transposition; the full multiply-and-update c = alpha*op(a)*op(b) + beta*c with transpose options and scaling factors; the LUP factorization P*A = L*U, overwriting the input with L stored under the diagonal (the diagonal of L is always Complex.One) and U stored on and above the diagonal, and returning the pivot indices (equivalent to GETRF); computing the inverse of a matrix via LU factorization (GETRF and GETRI) or from an already factored matrix (GETRI); and solving A*X=B via LU factorization (GETRF and GETRS), where B is overwritten with X on exit.
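A reference (unoptimised) sketch of the GEMM-style update c = alpha*a*b + beta*c documented for these providers, assuming column-major 1-D storage and no transposition; the real providers add the op(a)/op(b) transpose options and call into optimised native code:

```csharp
using System;

static class GemmSketch
{
    public static void MatrixMultiplyWithUpdate(
        double alpha, double[] a, int rowsA, int colsA,
        double[] b, int rowsB, int colsB,
        double beta, double[] c)
    {
        if (colsA != rowsB) throw new ArgumentException("Inner dimensions must match.");

        for (int j = 0; j < colsB; j++)        // column of c and b
        {
            for (int i = 0; i < rowsA; i++)    // row of c and a
            {
                double sum = 0.0;
                for (int k = 0; k < colsA; k++)
                    sum += a[i + k * rowsA] * b[k + j * rowsB];   // column-major indexing
                c[i + j * rowsA] = alpha * sum + beta * c[i + j * rowsA];
            }
        }
    }
}
```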
- The remaining Complex members cover solving A*X=B from a previously factored A and its pivot indices (GETRS); the Cholesky factorization of a square, positive definite matrix, overwriting it with the factor (POTRF); solving A*X=B via Cholesky (POTRF and POTRS) or from a previously factored matrix (POTRS); solving A*X=B using the singular value decomposition of A; and computing the singular value decomposition itself, optionally producing the left singular vectors U and the transposed right singular vectors VT alongside the singular values (equivalent to GESVD), with A possibly overwritten on exit.
- The identical member set is then documented for Complex32 data (the unit diagonal of L being Complex32.One and the simplified GEMM using Complex32.One and Complex32.Zero), followed by the provider's hint path for native binaries, the availability check, the initialise-and-verify call (if it fails, consider falling back to alternatives such as the managed provider) and the call that frees memory buffers, caches and handles allocated in or to the provider.
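What the POTRF-style Cholesky factorization above produces can be shown with a textbook, in-place sketch on a column-major 1-D array (this is only an illustration of A = L*L^T, not the optimised native routine):

```csharp
using System;

static class CholeskySketch
{
    public static void CholeskyFactor(double[] a, int order)
    {
        for (int j = 0; j < order; j++)
        {
            // Diagonal element: l_jj = sqrt(a_jj - sum_k l_jk^2)
            double d = a[j + j * order];
            for (int k = 0; k < j; k++)
                d -= a[j + k * order] * a[j + k * order];
            if (d <= 0.0) throw new ArgumentException("Matrix is not positive definite.");
            a[j + j * order] = Math.Sqrt(d);

            // Column below the diagonal: l_ij = (a_ij - sum_k l_ik*l_jk) / l_jj
            for (int i = j + 1; i < order; i++)
            {
                double s = a[i + j * order];
                for (int k = 0; k < j; k++)
                    s -= a[i + k * order] * a[j + k * order];
                a[i + j * order] = s / a[j + j * order];
            }

            // Zero the strictly upper triangle so that only L remains.
            for (int i = 0; i < j; i++)
                a[i + j * order] = 0.0;
        }
    }
}
```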
- The same linear algebra member set is documented twice more, for double precision data (unit diagonal 1.0, simplified GEMM with alpha set to 1.0 and beta set to 0.0) and for single precision data (1.0f and 0.0f): the dot product (DOT), scaled vector addition (AXPY), array scaling (SCAL), matrix multiplication and the multiply-and-update c = alpha*op(a)*op(b) + beta*c (GEMM), the LUP factorization with pivot indices (GETRF), matrix inversion via LU (GETRF and GETRI) and from a factored matrix (GETRI), LU-based solves (GETRF/GETRS and GETRS), the Cholesky factorization and its solves (POTRF, POTRS), SVD-based solving of A*X=B, and the singular value decomposition itself (GESVD).
- A transpose enumeration selects how a matrix is handled in these calls: not transposed, transposed, or conjugate transposed (for a real matrix the conjugate transpose is just the transpose). A matrix norm enumeration lists the supported norms: the 1-norm, the Frobenius norm, the infinity norm and the largest absolute value norm.
- The general interface to linear algebra algorithms that work off 1-D arrays exposes the availability check, the initialise-and-verify call and the resource-freeing call; a generic variant supports the Double, Single, Complex and Complex32 data types. Its first members are the scaled vector addition result = y + alpha*x (similar to AXPY) and array scaling usable for vectors and matrices (similar to SCAL); the remaining members follow below.
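The four matrix norms in the enumeration above can be written down directly; a small reference sketch for column-major 1-D storage (rows x columns):

```csharp
using System;

static class MatrixNormSketch
{
    // 1-norm: maximum absolute column sum.
    public static double OneNorm(double[] m, int rows, int columns)
    {
        double norm = 0.0;
        for (int j = 0; j < columns; j++)
        {
            double colSum = 0.0;
            for (int i = 0; i < rows; i++) colSum += Math.Abs(m[i + j * rows]);
            norm = Math.Max(norm, colSum);
        }
        return norm;
    }

    // Infinity norm: maximum absolute row sum.
    public static double InfinityNorm(double[] m, int rows, int columns)
    {
        double norm = 0.0;
        for (int i = 0; i < rows; i++)
        {
            double rowSum = 0.0;
            for (int j = 0; j < columns; j++) rowSum += Math.Abs(m[i + j * rows]);
            norm = Math.Max(norm, rowSum);
        }
        return norm;
    }

    // Frobenius norm: square root of the sum of squared entries.
    public static double FrobeniusNorm(double[] m)
    {
        double sum = 0.0;
        foreach (double v in m) sum += v * v;
        return Math.Sqrt(sum);
    }

    // Largest absolute value norm.
    public static double LargestAbsoluteValue(double[] m)
    {
        double max = 0.0;
        foreach (double v in m) max = Math.Max(max, Math.Abs(v));
        return max;
    }
}
```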
For each data type the interface then documents element-wise conjugation of an array, the dot product, and point-wise add, subtract, multiply, divide and power of two arrays (z = x + y, z = x - y, z = x * y, z = x / y, z = x ^ y), each noted as having no direct BLAS equivalent but being commonly available as optimized (parallel and/or vectorized) library routines, followed by computation of a requested matrix norm and the simplified and full GEMM multiplications already listed above.
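The point-wise operations and the four norm types have no single BLAS routine behind them, as the removed comments note. A small NumPy sketch of what each one computes (illustration only):

```python
import numpy as np

x = np.array([[1.0, -2.0], [3.0, 4.0]])
y = np.array([[5.0, 6.0], [7.0, -8.0]])

# Point-wise operations on equally sized arrays (vectors or matrices)
print(x + y, x - y, x * y, x / y, x ** y, sep="\n")

# The four matrix norms from the removed enumeration
print(np.linalg.norm(x, 1))        # 1-norm (largest absolute column sum)
print(np.linalg.norm(x, "fro"))    # Frobenius norm
print(np.linalg.norm(x, np.inf))   # infinity norm (largest absolute row sum)
print(np.abs(x).max())             # largest absolute value
```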
Next come the factorization families, in the same per-type pattern: LU (GETRF factorization, GETRI inversion of a fresh or already factored matrix, GETRS solves from a fresh or already factored matrix), Cholesky (POTRF factorization plus POTRS solves, fresh or factored), and QR in a full variant (A is overwritten with R and an M by M Q is returned) and a thin variant for M > N (A is overwritten with Q and an N by N R is returned), both described as similar to the GEQRF and ORGQR LAPACK routines. The QR-based solvers for A*X=B require at least as many rows as columns and can reuse a previous factorization; for the native provider the Q argument may be null because Q is kept inside the R matrix. A factor-once, solve-many sketch of the LU and Cholesky pairs follows.
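A minimal SciPy sketch of the GETRF/GETRS and POTRF/POTRS pairings described above (SciPy wraps the same LAPACK routines; this is an illustration of the semantics, not the removed provider's interface):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, cho_factor, cho_solve

a = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([[1.0, 2.0], [3.0, 4.0]])      # B with two columns

# GETRF: factor P*A = L*U once (L and U share one array, plus pivot indices)
lu, piv = lu_factor(a)
# GETRS: reuse the factorization for any number of right-hand sides
x = lu_solve((lu, piv), b)
print(np.allclose(a @ x, b))                # True

# POTRF/POTRS: the same pattern for a symmetric positive definite matrix
spd = np.array([[4.0, 2.0], [2.0, 3.0]])
factor = cho_factor(spd)
print(np.allclose(spd @ cho_solve(factor, b), b))   # True
```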
The interface section closes with the SVD family (GESVD decomposition with singular values in ascending order, an SVD-based solver for A*X=B, and a solver that reuses previously computed s, U and VT values) and a combined eigenvalue/eigenvector decomposition that, for a matrix of a given order (symmetric or not), returns the eigenvectors, the eigenvalues λ in ascending order and the block-diagonal eigenvalue matrix.

Also deleted are the provider-selection comments: a gettable/settable linear algebra provider (with a note to prefer UseNativeMKL or UseManaged), an optional path for native provider binaries that falls back to the `MathNetNumericsLAProviderPath` environment variable or the default probing paths, and the policy values "try a native provider if available", "use the best provider available" and "use the provider configured via the `MathNetNumericsLAProvider` environment variable, else the best one". From here on the file documents the managed linear algebra provider, once per data type, repeating the same routine descriptions.
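Both the QR-based and the SVD-based solvers target systems with at least as many rows as columns and return the least-squares solution of A*X=B. A NumPy sketch of that contract (note that NumPy returns singular values in descending order, whereas the removed docs store them ascending):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((6, 3))          # rows >= columns, as the QR solver requires
b = rng.standard_normal((6, 2))

# SVD route: A = U*diag(s)*Vt, so X = V * diag(1/s) * U^T * B
u, s, vt = np.linalg.svd(a, full_matrices=False)
x_svd = vt.T @ ((u.T @ b) / s[:, None])

# Thin-QR route: A = Q*R, so X = R^{-1} * Q^T * B
q, r = np.linalg.qr(a)
x_qr = np.linalg.solve(r, q.T @ b)

print(np.allclose(x_svd, x_qr))          # both give the least-squares solution
```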
- - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Cache-Oblivious Matrix Multiplication - - if set to true transpose matrix A. - if set to true transpose matrix B. - The value to scale the matrix A with. - The matrix A. - Row-shift of the left matrix - Column-shift of the left matrix - The matrix B. - Row-shift of the right matrix - Column-shift of the right matrix - The matrix C. - Row-shift of the result matrix - Column-shift of the result matrix - The number of rows of matrix op(A) and of the matrix C. - The number of columns of matrix op(B) and of the matrix C. - The number of columns of matrix op(A) and the rows of the matrix op(B). - The constant number of rows of matrix op(A) and of the matrix C. - The constant number of columns of matrix op(B) and of the matrix C. - The constant number of columns of matrix op(A) and the rows of the matrix op(B). - Indicates if this is the first recursion. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). 
The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - The B matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. 
- - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Assumes that and have already been transposed. - - - - - Assumes that and have already been transposed. - - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. 
- This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - The requested of the matrix. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Cache-Oblivious Matrix Multiplication - - if set to true transpose matrix A. - if set to true transpose matrix B. - The value to scale the matrix A with. - The matrix A. - Row-shift of the left matrix - Column-shift of the left matrix - The matrix B. - Row-shift of the right matrix - Column-shift of the right matrix - The matrix C. - Row-shift of the result matrix - Column-shift of the result matrix - The number of rows of matrix op(A) and of the matrix C. - The number of columns of matrix op(B) and of the matrix C. - The number of columns of matrix op(A) and the rows of the matrix op(B). - The constant number of rows of matrix op(A) and of the matrix C. - The constant number of columns of matrix op(B) and of the matrix C. 
- The constant number of columns of matrix op(A) and the rows of the matrix op(B). - Indicates if this is the first recursion. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. 
- The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Assumes that and have already been transposed. - - - - - Assumes that and have already been transposed. - - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. 
If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. 
- The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Cache-Oblivious Matrix Multiplication - - if set to true transpose matrix A. - if set to true transpose matrix B. - The value to scale the matrix A with. - The matrix A. - Row-shift of the left matrix - Column-shift of the left matrix - The matrix B. - Row-shift of the right matrix - Column-shift of the right matrix - The matrix C. - Row-shift of the result matrix - Column-shift of the result matrix - The number of rows of matrix op(A) and of the matrix C. - The number of columns of matrix op(B) and of the matrix C. - The number of columns of matrix op(A) and the rows of the matrix op(B). - The constant number of rows of matrix op(A) and of the matrix C. - The constant number of columns of matrix op(B) and of the matrix C. - The constant number of columns of matrix op(A) and the rows of the matrix op(B). - Indicates if this is the first recursion. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. 
- - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. 
- The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. 
- - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Cache-Oblivious Matrix Multiplication - - if set to true transpose matrix A. - if set to true transpose matrix B. - The value to scale the matrix A with. - The matrix A. - Row-shift of the left matrix - Column-shift of the left matrix - The matrix B. - Row-shift of the right matrix - Column-shift of the right matrix - The matrix C. - Row-shift of the result matrix - Column-shift of the result matrix - The number of rows of matrix op(A) and of the matrix C. - The number of columns of matrix op(B) and of the matrix C. - The number of columns of matrix op(A) and the rows of the matrix op(B). - The constant number of rows of matrix op(A) and of the matrix C. - The constant number of columns of matrix op(B) and of the matrix C. - The constant number of columns of matrix op(A) and the rows of the matrix op(B). - Indicates if this is the first recursion. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. 
- - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. 
- If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. 
- The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. 
* Computes the Cholesky factorization of a square, positive definite matrix; on exit the matrix is overwritten with the factorization (equivalent to the POTRF LAPACK routine). An internal helper calculates a single Cholesky step over a column range from previously calculated multipliers, using the available processors.
* Solves A*X=B for X using Cholesky factorization, either factoring A first (POTRF and POTRS) or reusing a previously factored matrix (POTRS); one overload solves a single column. On entry the array holds the B matrix and on exit the X matrix (see the sketch after this list).
* Computes the QR factorization of an M by N matrix A (similar to the GEQRF and ORGQR LAPACK routines). One overload overwrites A with R and returns the M by M Q matrix; the thin variant overwrites A with Q and returns an N by N R matrix. A min(m,n) vector receives additional information used by the QR solve routines. Internal helpers compute Q or R over a row/column range of a work array and copy columns from the initial matrix into the work array.
* Solves A*X=B for X using QR factorization of A, or using a previously factored Q and R plus the additional-information vector (which is only used by the native solver and can be null for the managed provider). The type of QR factorization can be selected, and rows must be greater than or equal to columns.
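A minimal sketch of the Cholesky factor-then-solve flow referenced above: an unblocked lower-triangular factorization followed by forward and back substitution. It assumes a well-conditioned positive definite input; the provider's version is blocked and parallel, so this is illustration only:

```csharp
using System;

class CholeskyDemo
{
    // Returns L (lower triangular, row-major) with A = L * L^T.
    static double[,] Factor(double[,] a, int n)
    {
        var l = new double[n, n];
        for (int j = 0; j < n; j++)
        {
            double diag = a[j, j];
            for (int k = 0; k < j; k++) diag -= l[j, k] * l[j, k];
            l[j, j] = Math.Sqrt(diag);           // assumes positive definiteness
            for (int i = j + 1; i < n; i++)
            {
                double sum = a[i, j];
                for (int k = 0; k < j; k++) sum -= l[i, k] * l[j, k];
                l[i, j] = sum / l[j, j];
            }
        }
        return l;
    }

    // Solves A*x = b given A = L*L^T: forward substitution, then back substitution.
    static double[] Solve(double[,] l, double[] b, int n)
    {
        var y = new double[n];
        for (int i = 0; i < n; i++)
        {
            double s = b[i];
            for (int k = 0; k < i; k++) s -= l[i, k] * y[k];
            y[i] = s / l[i, i];
        }
        var x = new double[n];
        for (int i = n - 1; i >= 0; i--)
        {
            double s = y[i];
            for (int k = i + 1; k < n; k++) s -= l[k, i] * x[k];
            x[i] = s / l[i, i];
        }
        return x;
    }

    static void Main()
    {
        var a = new double[,] { { 4, 2 }, { 2, 3 } };   // symmetric positive definite
        var b = new double[] { 8, 7 };
        var l = Factor(a, 2);
        var x = Solve(l, b, 2);
        Console.WriteLine($"x = [{x[0]}, {x[1]}]");      // expected approximately [1.25, 1.5]
    }
}
```

The same forward/back substitution pattern is what the LU-based solves use once the factors are available.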
* Computes the singular value decomposition of an M by N matrix A (equivalent to the GESVD LAPACK routine); A may be overwritten on exit, the singular values are returned in ascending value, and if the singular vectors are requested U receives the left singular vectors and VT the transposed right singular vectors.
* Solves A*X=B for X using the singular value decomposition of A, either decomposing A directly or reusing the s values and singular vectors of a previous decomposition (see the sketch below); on exit the solution matrix is returned.
* Computes the eigenvalues and eigenvectors of a (symmetric or nonsymmetric) matrix of a given order, returning the eigenvectors, the eigenvalues (λ) in ascending value and the block diagonal eigenvalue matrix; the matrix-shaped output arrays must have length order * order.
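The solve-from-an-existing-SVD overload amounts to applying the pseudo-inverse, x = V * Σ⁺ * Uᵀ * b. A minimal sketch, assuming the factors u, s and vt have already been computed elsewhere (here they are simply hard-coded for a diagonal matrix) and that near-zero singular values are skipped:

```csharp
using System;

class SvdSolveDemo
{
    // x = V * pinv(S) * U^T * b, given an already computed SVD of A (m x n).
    // u is m x m, vt is n x n, s holds the singular values.
    static double[] Solve(double[,] u, double[] s, double[,] vt, double[] b,
                          int m, int n, double tol = 1e-12)
    {
        var x = new double[n];
        for (int i = 0; i < s.Length; i++)
        {
            if (s[i] <= tol) continue;          // skip (near-)zero singular values
            double coeff = 0.0;
            for (int r = 0; r < m; r++) coeff += u[r, i] * b[r];   // u_i . b
            coeff /= s[i];
            for (int c = 0; c < n; c++) x[c] += coeff * vt[i, c];  // + coeff * v_i
        }
        return x;
    }

    static void Main()
    {
        // A = diag(3, 2): its SVD is U = I, s = [3, 2], VT = I.
        var u = new double[,] { { 1, 0 }, { 0, 1 } };
        var s = new double[] { 3, 2 };
        var vt = new double[,] { { 1, 0 }, { 0, 1 } };
        var b = new double[] { 6, 4 };
        var x = Solve(u, s, vt, b, m: 2, n: 2);
        Console.WriteLine($"x = [{x[0]}, {x[1]}]");   // expected [2, 2]
    }
}
```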
Internally, the eigenvalue decomposition builds on routines derived from the Algol procedures in the Handbook for Automatic Computation, Vol. II, Linear Algebra, and the corresponding EISPACK Fortran subroutines:

* HTRIDI: reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations, storing the real and imaginary parts of the intermediate values and further information about the transformations.
* tql2: the symmetric tridiagonal QL algorithm, applied to the eigenvector data array.
* HTRIBK: determines the eigenvectors by undoing the symmetric tridiagonalize transformation.
* orthes/ortran: nonsymmetric reduction to Hessenberg form.
* hqr2: nonsymmetric reduction from Hessenberg form to real Schur form.

A few internal helpers assume that their matrix arguments have already been transposed.

The same kernels and factorizations are provided for every supported numeric type; apart from the scalar constants (the unit diagonal of L is 1.0, Complex.One or Complex32.One, and the simplified GEMM uses the corresponding one and zero constants for alpha and beta), the behaviour is identical.
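To make the symmetric eigenvalue path concrete on a case small enough to solve by hand, here is the closed-form 2 x 2 decomposition; the general routines above handle arbitrary order and do not use this formula:

```csharp
using System;

class SymmetricEigen2x2Demo
{
    // Eigenvalues and eigenvectors of [[a, b], [b, d]] via the closed form.
    static void Decompose(double a, double b, double d,
                          out double lambda1, out double lambda2,
                          out double[] v1, out double[] v2)
    {
        double mean = (a + d) / 2.0;
        double delta = Math.Sqrt((a - d) * (a - d) / 4.0 + b * b);
        lambda1 = mean - delta;                  // ascending order, as in the docs
        lambda2 = mean + delta;

        if (b == 0.0)                            // already diagonal
        {
            v1 = a <= d ? new[] { 1.0, 0.0 } : new[] { 0.0, 1.0 };
            v2 = a <= d ? new[] { 0.0, 1.0 } : new[] { 1.0, 0.0 };
            return;
        }

        v1 = Normalize(new[] { lambda1 - d, b });
        v2 = Normalize(new[] { lambda2 - d, b });
    }

    static double[] Normalize(double[] v)
    {
        double len = Math.Sqrt(v[0] * v[0] + v[1] * v[1]);
        return new[] { v[0] / len, v[1] / len };
    }

    static void Main()
    {
        Decompose(2, 1, 2, out double l1, out double l2, out double[] v1, out double[] v2);
        Console.WriteLine($"lambda = {l1}, {l2}");              // 1 and 3
        Console.WriteLine($"v1 = [{v1[0]}, {v1[1]}]");          // ~[-0.707, 0.707]
        Console.WriteLine($"v2 = [{v2[0]}, {v2[1]}]");          // ~[ 0.707, 0.707]
    }
}
```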
Every provider also exposes a small lifecycle API:

* Try to find out whether the provider is available, at least in principle; verification may still fail if it is available, but it will certainly fail if it is unavailable.
* Initialize and verify that the provider is indeed available; if not, fall back to alternatives like the managed provider.
* Free the memory buffers, caches and handles allocated in or to the provider; this does not unload the provider itself, which remains usable afterwards.
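A minimal sketch of how a caller might drive that probe / initialize / fall-back / free cycle. The interface, its member names and the two dummy providers below are hypothetical; only the lifecycle pattern itself comes from the entries above:

```csharp
using System;

// Hypothetical shape of a provider; the real interface and names may differ.
interface ILinearAlgebraProvider
{
    bool IsAvailable();      // cheap probe: "available, at least in principle"
    void InitializeVerify(); // throws if the provider cannot actually be used
    void FreeResources();    // releases buffers/caches/handles, provider stays usable
}

class NativeProvider : ILinearAlgebraProvider
{
    public bool IsAvailable() => false;   // pretend the native binaries are missing
    public void InitializeVerify() => throw new InvalidOperationException("native binaries not found");
    public void FreeResources() { }
}

class ManagedProvider : ILinearAlgebraProvider
{
    public bool IsAvailable() => true;    // always available
    public void InitializeVerify() { }
    public void FreeResources() { }
}

class ProviderSelectionDemo
{
    static ILinearAlgebraProvider Select(params ILinearAlgebraProvider[] candidates)
    {
        foreach (var p in candidates)
        {
            if (!p.IsAvailable()) continue;   // skip providers that cannot work at all
            try { p.InitializeVerify(); return p; }
            catch { /* probe succeeded but verification failed; try the next one */ }
        }
        throw new InvalidOperationException("no linear algebra provider could be initialized");
    }

    static void Main()
    {
        var provider = Select(new NativeProvider(), new ManagedProvider());
        Console.WriteLine(provider.GetType().Name);   // ManagedProvider (the fallback)
        provider.FreeResources();                     // caches freed, provider still usable
    }
}
```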
For the real-valued types the eigenvalue path uses the corresponding real EISPACK-derived helpers instead: tred2 (symmetric Householder reduction to tridiagonal form), tql2 (the symmetric tridiagonal QL algorithm), orthes/ortran (nonsymmetric reduction to Hessenberg form) and hqr2 (nonsymmetric reduction from Hessenberg to real Schur form, keeping the real and imaginary parts of the eigenvalues in separate arrays). A small helper performs complex scalar division X/Y, taking the real and imaginary parts of X and Y and returning the division result.
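A sketch of such a complex scalar division helper. The scaled formulation below, which divides through by the larger component of Y to avoid overflow, is the usual choice in EISPACK-derived code; whether this exact variant is used here is an assumption:

```csharp
using System;

class ComplexDivisionDemo
{
    // (xr + i*xi) / (yr + i*yi), computed in the scaled form that avoids
    // overflow when yr or yi is large.
    static (double Re, double Im) CDiv(double xr, double xi, double yr, double yi)
    {
        if (Math.Abs(yr) >= Math.Abs(yi))
        {
            double t = yi / yr;
            double d = yr + yi * t;
            return ((xr + xi * t) / d, (xi - xr * t) / d);
        }
        else
        {
            double t = yr / yi;
            double d = yr * t + yi;
            return ((xr * t + xi) / d, (xi * t - xr) / d);
        }
    }

    static void Main()
    {
        var (re, im) = CDiv(1, 2, 3, 4);             // (1+2i)/(3+4i)
        Console.WriteLine($"{re} + {im}i");          // expected 0.44 + 0.08i
    }
}
```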
Intel's Math Kernel Library (MKL) linear algebra provider documents the same operation set for each supported numeric type, backed by native MKL routines:

* Matrix norms, the dot product (DOT), scaled vector addition (AXPY) and array scaling (SCAL).
* Matrix multiplication: the simplified GEMM with alpha one and beta zero, and the full update c = alpha*op(a)*op(b) + beta*c.
* The LUP factorization, inversion and solves (GETRF/GETRI/GETRS), the Cholesky factorization and solves (POTRF/POTRS), the full and thin QR factorizations (the thin form for M > N) with QR-based least-squares solves, the singular value decomposition with SVD-based solves (GESVD), and the eigenvalue/eigenvector decomposition. Rows must be greater than or equal to columns for the QR solves; a least-squares sketch follows after this list.
* The point-wise array operations: add, subtract, multiply, divide and power.
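As promised, a naive thin-QR least-squares sketch using classical Gram-Schmidt and back substitution. It assumes A has full column rank and rows >= columns; the LAPACK-style GEQRF/ORGQR path is Householder-based and numerically more robust, so this is for illustration only:

```csharp
using System;

class QrLeastSquaresDemo
{
    // Thin QR via classical Gram-Schmidt: a (m x n, m >= n) = q (m x n) * r (n x n).
    static void QrFactor(double[,] a, int m, int n, double[,] q, double[,] r)
    {
        for (int j = 0; j < n; j++)
        {
            var v = new double[m];
            for (int i = 0; i < m; i++) v[i] = a[i, j];
            for (int k = 0; k < j; k++)
            {
                double dot = 0.0;
                for (int i = 0; i < m; i++) dot += q[i, k] * a[i, j];
                r[k, j] = dot;
                for (int i = 0; i < m; i++) v[i] -= dot * q[i, k];
            }
            double norm = 0.0;
            for (int i = 0; i < m; i++) norm += v[i] * v[i];
            r[j, j] = Math.Sqrt(norm);                       // assumes full column rank
            for (int i = 0; i < m; i++) q[i, j] = v[i] / r[j, j];
        }
    }

    // Least squares: minimize ||A*x - b|| by solving R*x = Q^T * b.
    static double[] Solve(double[,] q, double[,] r, double[] b, int m, int n)
    {
        var qtb = new double[n];
        for (int j = 0; j < n; j++)
            for (int i = 0; i < m; i++) qtb[j] += q[i, j] * b[i];

        var x = new double[n];
        for (int j = n - 1; j >= 0; j--)
        {
            double s = qtb[j];
            for (int k = j + 1; k < n; k++) s -= r[j, k] * x[k];
            x[j] = s / r[j, j];
        }
        return x;
    }

    static void Main()
    {
        // Overdetermined 3 x 2 system: fit y = c0 + c1 * t through (0,1), (1,2), (2,3).
        var a = new double[,] { { 1, 0 }, { 1, 1 }, { 1, 2 } };
        var b = new double[] { 1, 2, 3 };
        var q = new double[3, 2];
        var r = new double[2, 2];
        QrFactor(a, 3, 2, q, r);
        var x = Solve(q, r, b, 3, 2);
        Console.WriteLine($"c0 = {x[0]}, c1 = {x[1]}");      // expected c0 = 1, c1 = 1
    }
}
```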
- - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - Hint path where to look for the native binaries - - Sets the desired bit consistency on repeated identical computations on varying CPU architectures, - as a trade-off with performance. - - VML optimal precision and rounding. - VML accuracy mode. - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. - If calling this method fails, consider to fall back to alternatives like the managed provider. - - - - - Frees memory buffers, caches and handles allocated in or to the provider. 
- Does not unload the provider itself, it is still usable afterwards. - - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. 
- The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. 
- There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0f and beta set to 0.0f, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. 
The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. 
- - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Error codes return from the MKL provider. - - - - - Unable to allocate memory. - - - - - OpenBLAS linear algebra provider. - - - OpenBLAS linear algebra provider. 
- - - OpenBLAS linear algebra provider. - - - OpenBLAS linear algebra provider. - - - OpenBLAS linear algebra provider. - - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. 
- - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . 
- On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. 
- - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. 
The length of the array must be order * order. - - - Hint path where to look for the native binaries - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. - If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . 
- The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. 
- - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0f and beta set to 0.0f, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. 
- This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Computes the eigenvalues and eigenvectors of a matrix. 
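The removed comments document provider-level kernels rather than calls an application would normally make. As a rough orientation, here is a minimal sketch of how the documented LU-based solve (the GETRF/GETRS path) is typically reached from user code; it assumes the public Math.NET Numerics matrix API (Matrix<double>.Build, Vector<double>.Build, Solve), which is not itself part of the removed text.

```csharp
// Minimal sketch (assumed public API, not taken from the removed file):
// solving A*x = b through the Math.NET Numerics matrix types. Internally this
// ends up in the provider routines documented above, i.e. an LU factorization
// (GETRF) followed by a triangular solve (GETRS).
using System;
using MathNet.Numerics.LinearAlgebra;

static class LuSolveSketch
{
    static void Main()
    {
        // A small, square, non-singular system A*x = b.
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4.0, 3.0 },
            { 6.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 10.0, 12.0 });

        // Solve() uses an LU decomposition for square dense matrices; the
        // selected linear algebra provider supplies the actual factorization.
        var x = a.Solve(b);
        Console.WriteLine(x);
    }
}
```

Which provider actually performs the factorization (managed, MKL or OpenBLAS) is a runtime choice and does not change this calling code.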
[Also removed: the OpenBLAS provider error codes (e.g. unable to allocate memory) and the XML documentation for the Math.NET Numerics random number generators: a thread-safe wrapper around System.Random; multiplicative congruential generators with modulus 2^31-1 / multiplier 1132489760 and modulus 2^59 / multiplier 13^13; the Mersenne Twister 19937; a 32-bit combined multiple recursive generator with two components of order 3, based on P. L'Ecuyer, "Combined Multiple Recursive Random Number Generators," Operations Research, 44, 5 (1996), 816-822; a Parallel Additive Lagged Fibonacci generator (modulus 2^32, default lags 418 and 1279, following the Boost Random implementation); extension methods on System.Random for uniform doubles, full-range Int32/Int64, bytes, decimals and booleans; time- and GUID-based seed helpers (explicitly not suitable for cryptography); and the RandomSource base class that adds the thread-safety layer and the array and infinite-sequence fill methods.]
Range: maxExclusive ≥ 1. - - - - Fills an array with random numbers within a specified range. - - The array to fill with random values. - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Returns an array with random 32-bit signed integers within the specified range. - - The size of the array to fill. - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Returns an infinite sequence of random 32-bit signed integers greater than or equal to zero and less than . - - - - - Returns an infinite sequence of random numbers within a specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Fills the elements of a specified array of bytes with random numbers. - - An array of bytes to contain random numbers. - is null. - - - - Returns a random number between 0.0 and 1.0. - - A double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than 2147483647 (). - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 32 (not verified). - - - - - Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 64 (not verified). - - - - - Returns a random 32-bit signed integer within the specified range. - - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). - - - - Returns a random 32-bit signed integer within the specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). - - - - A random number generator based on the class in the .NET library. - - - - - Construct a new random number generator with a random seed. - - - - - Construct a new random number generator with random seed. - - if set to true , the class is thread safe. - - - - Construct a new random number generator with random seed. - - The seed value. - - - - Construct a new random number generator with random seed. - - The seed value. - if set to true , the class is thread safe. - - - - Default instance, thread-safe. - - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Returns a random 32-bit signed integer within the specified range. - - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). - - - - Returns a random 32-bit signed integer within the specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. 
Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Fill an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. - WARNING: potentially very short random sequence length, can generate repeated partial sequences. - - Parallelized on large length, but also supports being called in parallel from multiple threads - - - - Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. - WARNING: potentially very short random sequence length, can generate repeated partial sequences. - - Parallelized on large length, but also supports being called in parallel from multiple threads - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Wichmann-Hill’s 1982 combined multiplicative congruential generator. - - See: Wichmann, B. A. & Hill, I. D. (1982), "Algorithm AS 183: - An efficient and portable pseudo-random number generator". Applied Statistics 31 (1982) 188-190 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - The seed is set to 1, if the zero is used as the seed. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Wichmann-Hill’s 2006 combined multiplicative congruential generator. - - See: Wichmann, B. A. & Hill, I. D. (2006), "Generating good pseudo-random numbers". - Computational Statistics & Data Analysis 51:3 (2006) 1614-1622 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. 
- - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - The seed is set to 1, if the zero is used as the seed. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Implements a multiply-with-carry Xorshift pseudo random number generator (RNG) specified in Marsaglia, George. (2003). Xorshift RNGs. - Xn = a * Xn−3 + c mod 2^32 - http://www.jstatsoft.org/v08/i14/paper - - - - - The default value for X1. - - - - - The default value for X2. - - - - - The default value for the multiplier. - - - - - The default value for the carry over. - - - - - The multiplier to compute a double-precision floating point number [0, 1) - - - - - Seed or last but three unsigned random number. - - - - - Last but two unsigned random number. - - - - - Last but one unsigned random number. - - - - - The value of the carry over. - - - - - The multiplier. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - Note: must be less than . - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - must be less than . - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - must be less than . - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. 
- - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - must be less than . - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Xoshiro256** pseudo random number generator. - A random number generator based on the class in the .NET library. - - - This is xoshiro256** 1.0, our all-purpose, rock-solid generator. It has - excellent(sub-ns) speed, a state space(256 bits) that is large enough - for any parallel application, and it passes all tests we are aware of. - - For generating just floating-point numbers, xoshiro256+ is even faster. - - The state must be seeded so that it is not everywhere zero.If you have - a 64-bit seed, we suggest to seed a splitmix64 generator and use its - output to fill s. - - For further details see: - David Blackman & Sebastiano Vigna (2018), "Scrambled Linear Pseudorandom Number Generators". - https://arxiv.org/abs/1805.01407 - - - - - Construct a new random number generator with a random seed. - - - - - Construct a new random number generator with random seed. - - if set to true , the class is thread safe. - - - - Construct a new random number generator with random seed. - - The seed value. - - - - Construct a new random number generator with random seed. - - The seed value. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 32 (not verified). - - - - - Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 64 (not verified). - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. 
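The generators documented above are easiest to see in a short C# sketch. This is only an illustration under the assumption that the MathNet.Numerics package is referenced; the class and member names are taken from the documentation text, and the seed 42 and the sample count 1000 are arbitrary example values.

```csharp
using System;
using MathNet.Numerics.Random;

class RandomDemo
{
    static void Main()
    {
        // Shared, thread-safe default Mersenne Twister instance.
        var rng = MersenneTwister.Default;
        double u = rng.NextDouble();               // uniform in [0.0, 1.0)

        // Explicitly seeded, thread-safe instance (a zero seed is replaced by one).
        var mt = new MersenneTwister(42, true);
        double[] samples = mt.NextDoubles(1000);   // fills an array with uniform samples

        // One of the documented System.Random extension methods.
        bool coin = mt.NextBoolean();

        Console.WriteLine($"{u:F6} {samples[0]:F6} {coin}");
    }
}
```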
[Root-finding algorithms: bisection, the Brent/Van Wijngaarden/Dekker method (after Numerical Recipes), Broyden's method with a finite-difference Jacobian helper, the cubic formula for the real and complex roots of x^3 + a2*x^2 + a1*x + a0 = 0, a pure Newton-Raphson that aborts as soon as the iterate leaves the bracketing interval, a robust Newton-Raphson that falls back to bisection or interval subdivision when it overshoots or converges too slowly, a pure secant method, and a bracketing helper that expands an interval until values of opposite sign are found. Each finder offers FindRoot overloads (throwing) and TryFindRoot variants with configurable accuracy and iteration limits.]
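A similarly hedged C# sketch of the root finders described above (same assumption: the MathNet.Numerics package is referenced). The test function x^3 - 2x - 5 and the bracket [2, 3] are invented for the example; the FindRoot parameters follow the descriptions above.

```csharp
using System;
using MathNet.Numerics.RootFinding;

class RootDemo
{
    static void Main()
    {
        Func<double, double> f  = x => x * x * x - 2 * x - 5;   // f(x)  = x^3 - 2x - 5
        Func<double, double> df = x => 3 * x * x - 2;           // f'(x) = 3x^2 - 2

        // Brent's method only needs a bracketing interval [2, 3].
        double r1 = Brent.FindRoot(f, 2.0, 3.0, 1e-8, 100);

        // Robust Newton-Raphson: accuracy 1e-8, 100 iterations,
        // 20 subdivisions if the bracket lacks a sign change.
        double r2 = RobustNewtonRaphson.FindRoot(f, df, 2.0, 3.0, 1e-8, 100, 20);

        Console.WriteLine($"{r1:F10} {r2:F10}");   // both approx. 2.0945514815
    }
}
```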
[Sorting helpers and special functions: in-place quick sort for single, tuple and triple lists (a key list plus up to two item lists permuted the same way, over the full range or a sub-range, with custom comparers and a primary/secondary-key variant), followed by the SpecialFunctions class. It documents the Airy functions Ai and Bi and their derivatives (plain and exponentially scaled), Bessel functions of the first and second kind, modified Bessel functions I and K, Hankel functions of the first and second kind, Kelvin functions ber/bei/ker/kei and their derivatives, the Euler Beta function and the (regularized) lower incomplete beta function, the error function Erf/Erfc and their inverses (rational approximations with tabulated polynomial coefficients per interval), the generalized exponential integral En, factorials and their logarithms, binomial and multinomial coefficients, the Gamma function and its logarithm (Lanczos approximation after Pugh 2004, accurate to roughly 13-14 digits), the upper and lower incomplete and regularized gamma functions with the inverse of the regularized lower one, the digamma function and its inverse, rising and falling factorials (Pochhammer), the generalized hypergeometric series, and harmonic and generalized harmonic numbers.]
- KelvinKei(nu, x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). - - the order of the the Kelvin function. - The non-negative real value to compute the Kelvin function of. - The Kelvin function kei. - - - - Returns the Kelvin function kei. - KelvinKei(x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). - KelvinKei(x) is equivalent to KelvinKei(0, x). - - The non-negative real value to compute the Kelvin function of. - The Kelvin function kei. - - - - Returns the derivative of the Kelvin function ker. - - The order of the Kelvin function. - The non-negative real value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function ker. - - - - Returns the derivative of the Kelvin function ker. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function ker. - - - - Returns the derivative of the Kelvin function kei. - - The order of the Kelvin function. - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function kei. - - - - Returns the derivative of the Kelvin function kei. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function kei. - - - - Computes the logistic function. see: http://en.wikipedia.org/wiki/Logistic - - The parameter for which to compute the logistic function. - The logistic function of . - - - - Computes the logit function, the inverse of the sigmoid logistic function. see: http://en.wikipedia.org/wiki/Logit - - The parameter for which to compute the logit function. This number should be - between 0 and 1. - The logarithm of divided by 1.0 - . - - - - ************************************** - COEFFICIENTS FOR METHODS bessi0 * - ************************************** - - Chebyshev coefficients for exp(-x) I0(x) - in the interval [0, 8]. - - lim(x->0){ exp(-x) I0(x) } = 1. - - - - Chebyshev coefficients for exp(-x) sqrt(x) I0(x) - in the inverted interval [8, infinity]. - - lim(x->inf){ exp(-x) sqrt(x) I0(x) } = 1/sqrt(2pi). - - - - - ************************************** - COEFFICIENTS FOR METHODS bessi1 * - ************************************** - - Chebyshev coefficients for exp(-x) I1(x) / x - in the interval [0, 8]. - - lim(x->0){ exp(-x) I1(x) / x } = 1/2. - - - - Chebyshev coefficients for exp(-x) sqrt(x) I1(x) - in the inverted interval [8, infinity]. - - lim(x->inf){ exp(-x) sqrt(x) I1(x) } = 1/sqrt(2pi). - - - - - ************************************** - COEFFICIENTS FOR METHODS bessk0, bessk0e * - ************************************** - - Chebyshev coefficients for K0(x) + log(x/2) I0(x) - in the interval [0, 2]. The odd order coefficients are all - zero; only the even order coefficients are listed. - - lim(x->0){ K0(x) + log(x/2) I0(x) } = -EUL. - - - - Chebyshev coefficients for exp(x) sqrt(x) K0(x) - in the inverted interval [2, infinity]. - - lim(x->inf){ exp(x) sqrt(x) K0(x) } = sqrt(pi/2). - - - - - ************************************** - COEFFICIENTS FOR METHODS bessk1, bessk1e * - ************************************** - - Chebyshev coefficients for x(K1(x) - log(x/2) I1(x)) - in the interval [0, 2]. - - lim(x->0){ x(K1(x) - log(x/2) I1(x)) } = 1. - - - - Chebyshev coefficients for exp(x) sqrt(x) K1(x) - in the interval [2, infinity]. - - lim(x->inf){ exp(x) sqrt(x) K1(x) } = sqrt(pi/2). - - - - Returns the modified Bessel function of first kind, order 0 of the argument. -

- The function is defined as i0(x) = j0( ix ). -

- The range is partitioned into the two intervals [0, 8] and - (8, infinity). Chebyshev polynomial expansions are employed - in each interval. -

- The value to compute the Bessel function of. - -
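The entry above evaluates I0 with two Chebyshev expansions, one per interval. As a hedged illustration of what the function computes (not of the library's Chebyshev method), the ascending power series I0(x) = sum_{k>=0} (x^2/4)^k / (k!)^2 can be summed directly for moderate |x|; the class and method names below are illustrative:

```csharp
using System;

static class BesselI0Sketch
{
    // Ascending series for the modified Bessel function I0(x):
    //   I0(x) = sum_{k>=0} (x^2/4)^k / (k!)^2
    // Illustrative only -- the library uses Chebyshev expansions on
    // [0, 8] and (8, infinity) instead.
    static double I0(double x)
    {
        double term = 1.0, sum = 1.0, q = x * x / 4.0;
        for (int k = 1; k < 200 && Math.Abs(term) > 1e-16 * sum; k++)
        {
            term *= q / (k * k);
            sum += term;
        }
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(I0(0.0)); // 1
        Console.WriteLine(I0(2.0)); // ~2.2796
    }
}
```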
- - Returns the modified Bessel function of the first kind, - order 1 of the argument. -

- The function is defined as i1(x) = -i j1( ix ). -

- The range is partitioned into the two intervals [0, 8] and - (8, infinity). Chebyshev polynomial expansions are employed - in each interval. -

- The value to compute the Bessel function of. - -
- - Returns the modified Bessel function of the second kind - of order 0 of the argument. -

- The range is partitioned into the two intervals [0, 2] and - (2, infinity). Chebyshev polynomial expansions are employed - in each interval. -

- The value to compute the Bessel function of. - -
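The bessk0 coefficient block further up states lim(x->0){ K0(x) + log(x/2) I0(x) } = -EUL. Because I0(0) = 1, this implies the small-argument approximation K0(x) ≈ -ln(x/2) - EUL, which is handy as a sanity check near zero. A hedged sketch follows; it assumes EUL denotes the Euler-Mascheroni constant, and the names are illustrative:

```csharp
using System;

static class BesselK0SmallX
{
    const double EulerGamma = 0.57721566490153286; // assumed to be the "EUL" constant above

    // Small-argument approximation implied by the documented limit:
    // K0(x) + ln(x/2) * I0(x) -> -EUL as x -> 0, and I0(0) = 1,
    // hence K0(x) ~ -ln(x/2) - EUL for x near 0.
    static double K0NearZero(double x) => -Math.Log(x / 2.0) - EulerGamma;

    static void Main()
    {
        Console.WriteLine(K0NearZero(0.01)); // ~4.7211, close to the true K0(0.01)
    }
}
```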
- - Returns the exponentially scaled modified Bessel function - of the second kind of order 0 of the argument. - - The value to compute the Bessel function of. - - - - Returns the modified Bessel function of the second kind - of order 1 of the argument. -

- The range is partitioned into the two intervals [0, 2] and - (2, infinity). Chebyshev polynomial expansions are employed - in each interval. -

- The value to compute the Bessel function of. - -
- - Returns the exponentially scaled modified Bessel function - of the second kind of order 1 of the argument. -

- k1e(x) = exp(x) * k1(x). -

- The value to compute the Bessel function of. - -
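The scaled entry above defines k1e(x) = exp(x) * k1(x), and the bessk1 coefficient block gives lim(x->inf){ exp(x) sqrt(x) K1(x) } = sqrt(pi/2), i.e. K1e(x) ≈ sqrt(pi/(2x)) for large x. A hedged sketch of why the scaled variant is useful: the unscaled value underflows a double long before the scaled one does (the names and the sample argument are illustrative):

```csharp
using System;

static class ScaledBesselK1Sketch
{
    // Leading asymptotic term taken from the documented limit:
    // exp(x) * sqrt(x) * K1(x) -> sqrt(pi/2) as x -> infinity.
    static double K1eLargeX(double x) => Math.Sqrt(Math.PI / (2.0 * x));

    static void Main()
    {
        double x = 800.0;
        double k1e = K1eLargeX(x);          // ~0.0443, perfectly representable
        double k1  = Math.Exp(-x) * k1e;    // true value ~1.6e-349, underflows to 0 in double
        Console.WriteLine($"K1e({x}) ~ {k1e},  K1({x}) ~ {k1}");
    }
}
```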
- - - Returns the modified Struve function of order 0. - - The value to compute the function of. - - - - Returns the modified Struve function of order 1. - - The value to compute the function of. - - - - Returns the difference between the Bessel I0 and Struve L0 functions. - - The value to compute the function of. - - - - Returns the difference between the Bessel I1 and Struve L1 functions. - - The value to compute the function of. - - - - Returns the spherical Bessel function of the first kind. - SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the first kind. - - - - Returns the spherical Bessel function of the first kind. - SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the first kind. - - - - Returns the spherical Bessel function of the second kind. - SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the second kind. - - - - Returns the spherical Bessel function of the second kind. - SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the second kind. - - - - Numerically stable exponential minus one, i.e. x -> exp(x)-1 - - A number specifying a power. - Returns exp(power)-1. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Evaluation functions, useful for function approximation. - - - - - Evaluate a polynomial at point x. - Coefficients are ordered by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - Coefficients are ordered by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - Coefficients are ordered by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. 
- The coefficients of the polynomial, coefficient for power k at index k. - - - - Numerically stable series summation - - provides the summands sequentially - Sum - - - Evaluates the series of Chebyshev polynomials Ti at argument x/2. - The series is given by -
-            y = Σ'_{i=0..N-1} coef[i] T_i(x/2)
- Coefficients are stored in reverse order, i.e. the zero - order term is last in the array. Note N is the number of - coefficients, not the order. -

- If coefficients are for the interval a to b, x must - have been transformed to x -> 2(2x - b - a)/(b-a) before - entering the routine. This maps x from (a, b) to (-1, 1), - over which the Chebyshev polynomials are defined. -

- If the coefficients are for the inverted interval, in - which (a, b) is mapped to (1/b, 1/a), the transformation - required is x -> 2(2ab/x - b - a)/(b-a). If b is infinity, - this becomes x -> 4a/x - 1. -

- SPEED: -

- Taking advantage of the recurrence properties of the - Chebyshev polynomials, the routine requires one more - addition per loop than evaluating a nested polynomial of - the same degree. -

- The coefficients of the polynomial. - Argument to the polynomial. - - Reference: https://bpm2.svn.codeplex.com/svn/Common.Numeric/Arithmetic.cs -

- Marked as Deprecated in - http://people.apache.org/~isabel/mahout_site/mahout-matrix/apidocs/org/apache/mahout/jet/math/Arithmetic.html - - - -
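The polynomial evaluation entries further above store the coefficient of x^k at index k (e.g. [3, -1, 2] for y = 2x^2 - x + 3). One standard way to evaluate such a layout is Horner's scheme, sketched below; this is an illustration under that layout assumption, not necessarily the library's exact implementation:

```csharp
using System;

static class PolynomialSketch
{
    // Horner evaluation; coefficients[k] is the coefficient of x^k.
    static double Evaluate(double x, params double[] coefficients)
    {
        double y = 0.0;
        for (int k = coefficients.Length - 1; k >= 0; k--)
            y = y * x + coefficients[k];
        return y;
    }

    static void Main()
    {
        // [3, -1, 2] represents y = 2x^2 - x + 3 (the example from the entry above).
        Console.WriteLine(Evaluate(2.0, 3.0, -1.0, 2.0)); // 2*4 - 2 + 3 = 9
    }
}
```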

- Summation of Chebyshev polynomials, using the Clenshaw method with Reinsch modification. - - The no. of terms in the sequence. - The coefficients of the Chebyshev series, length n+1. - The value at which the series is to be evaluated. - - ORIGINAL AUTHOR: - Dr. Allan J. MacLeod; Dept. of Mathematics and Statistics, University of Paisley; High St., PAISLEY, SCOTLAND - REFERENCES: - "An error analysis of the modified Clenshaw method for evaluating Chebyshev and Fourier series" - J. Oliver, J.I.M.A., vol. 20, 1977, pp379-391 - -
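A hedged sketch of the Chebyshev-series evaluation the two entries above describe: coefficients stored in reverse order (highest order first, zero-order term last), a Clenshaw-style recurrence, and a final 0.5*(b0 - b2) step that realises the primed sum (the zero-order term enters with half weight). The names below are illustrative, not the library's:

```csharp
using System;

static class ChebyshevSeriesSketch
{
    // Clenshaw-style evaluation of the series described above:
    //   y = Σ'_{i=0..N-1} coef[i] T_i(x/2)
    // with coef[0] holding the highest-order term and the zero-order
    // term last (reverse order, as documented). The final 0.5*(b0 - b2)
    // takes the zero-order term with half weight (the primed sum).
    static double ChebyshevSeries(double x, double[] coef)
    {
        double b0 = coef[0], b1 = 0.0, b2 = 0.0;
        for (int i = 1; i < coef.Length; i++)
        {
            b2 = b1;
            b1 = b0;
            b0 = x * b1 - b2 + coef[i];
        }
        return 0.5 * (b0 - b2);
    }

    static void Main()
    {
        // Reverse order: coef[0] = 0 is the T1 coefficient, coef[1] = 2 is the T0 coefficient,
        // so the primed series reduces to (2/2) * T0 = 1 for every x.
        Console.WriteLine(ChebyshevSeries(0.3, new[] { 0.0, 2.0 })); // 1
    }
}
```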
- - - Valley-shaped Rosenbrock function for 2 dimensions: (x,y) -> (1-x)^2 + 100*(y-x^2)^2. - This function has a global minimum at (1,1) with f(1,1) = 0. - Common range: [-5,10] or [-2.048,2.048]. - - - https://en.wikipedia.org/wiki/Rosenbrock_function - http://www.sfu.ca/~ssurjano/rosen.html - - - - - Valley-shaped Rosenbrock function for 2 or more dimensions. - This function have a global minimum of all ones and, for 8 > N > 3, a local minimum at (-1,1,...,1). - - - https://en.wikipedia.org/wiki/Rosenbrock_function - http://www.sfu.ca/~ssurjano/rosen.html - - - - - Himmelblau, a multi-modal function: (x,y) -> (x^2+y-11)^2 + (x+y^2-7)^2 - This function has 4 global minima with f(x,y) = 0. - Common range: [-6,6]. - Named after David Mautner Himmelblau - - - https://en.wikipedia.org/wiki/Himmelblau%27s_function - - - - - Rastrigin, a highly multi-modal function with many local minima. - Global minimum of all zeros with f(0) = 0. - Common range: [-5.12,5.12]. - - - https://en.wikipedia.org/wiki/Rastrigin_function - http://www.sfu.ca/~ssurjano/rastr.html - - - - - Drop-Wave, a multi-modal and highly complex function with many local minima. - Global minimum of all zeros with f(0) = -1. - Common range: [-5.12,5.12]. - - - http://www.sfu.ca/~ssurjano/drop.html - - - - - Ackley, a function with many local minima. It is nearly flat in outer regions but has a large hole at the center. - Global minimum of all zeros with f(0) = 0. - Common range: [-32.768, 32.768]. - - - http://www.sfu.ca/~ssurjano/ackley.html - - - - - Bowl-shaped first Bohachevsky function. - Global minimum of all zeros with f(0,0) = 0. - Common range: [-100, 100] - - - http://www.sfu.ca/~ssurjano/boha.html - - - - - Plate-shaped Matyas function. - Global minimum of all zeros with f(0,0) = 0. - Common range: [-10, 10]. - - - http://www.sfu.ca/~ssurjano/matya.html - - - - - Valley-shaped six-hump camel back function. - Two global minima and four local minima. Global minima with f(x) ) -1.0316 at (0.0898,-0.7126) and (-0.0898,0.7126). - Common range: x in [-3,3], y in [-2,2]. - - - http://www.sfu.ca/~ssurjano/camel6.html - - - - - Statistics operating on arrays assumed to be unsorted. - WARNING: Methods with the Inplace-suffix may modify the data array by reordering its entries. - - - - - - - - Returns the smallest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest absolute value from the unsorted data array. 
- Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the geometric mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the harmonic mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population standard deviation from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population covariance from the provided two sample arrays. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample array. - Second sample array. - - - - Evaluates the population covariance from the full population provided as two arrays. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population array. - Second population array. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the order statistic (order 1..N) from the unsorted data array. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. 
- One-based order of the statistic, must be between 1 and N (inclusive). - - - - Estimates the median value from the unsorted data array. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the p-Percentile value from the unsorted data array. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the first quartile value from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the third quartile value from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the inter-quartile range from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Quantile selector, between 0.0 and 1.0 (inclusive). - - R-8, SciPy-(1/3,1/3): - Linear interpolation of the approximate medians for order statistics. - When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. - - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified - by 4 parameters a, b, c and d, consistent with Mathematica. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Quantile selector, between 0.0 and 1.0 (inclusive) - a-parameter - b-parameter - c-parameter - d-parameter - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. 
- Quantile selector, between 0.0 and 1.0 (inclusive) - Quantile definition, to choose what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the unsorted data array. - The rank definition can be specified to be compatible - with an existing system. - WARNING: Works inplace and can thus causes the data array to be reordered. - - - - - Estimates the arithmetic sample mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the geometric mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the harmonic mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population standard deviation from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population covariance from the provided two sample arrays. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample array. - Second sample array. - - - - Evaluates the population covariance from the full population provided as two arrays. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population array. - Second population array. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. 
- - Sample array, no sorting is assumed. - - - - Returns the smallest value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the geometric mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the harmonic mean of the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Evaluates the population standard deviation from the full population provided as unsorted array. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample array, no sorting is assumed. - - - - Estimates the unbiased population covariance from the provided two sample arrays. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample array. - Second sample array. - - - - Evaluates the population covariance from the full population provided as two arrays. 
- On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population array. - Second population array. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the order statistic (order 1..N) from the unsorted data array. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Estimates the median value from the unsorted data array. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the p-Percentile value from the unsorted data array. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the first quartile value from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the third quartile value from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the inter-quartile range from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Quantile selector, between 0.0 and 1.0 (inclusive). - - R-8, SciPy-(1/3,1/3): - Linear interpolation of the approximate medians for order statistics. - When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. - - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified - by 4 parameters a, b, c and d, consistent with Mathematica. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. 
- Quantile selector, between 0.0 and 1.0 (inclusive) - a-parameter - b-parameter - c-parameter - d-parameter - - - - Estimates the tau-th quantile from the unsorted data array. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - WARNING: Works inplace and can thus causes the data array to be reordered. - - Sample array, no sorting is assumed. Will be reordered. - Quantile selector, between 0.0 and 1.0 (inclusive) - Quantile definition, to choose what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the unsorted data array. - The rank definition can be specified to be compatible - with an existing system. - WARNING: Works inplace and can thus causes the data array to be reordered. - - - - - A class with correlation measures between two datasets. - - - - - Auto-correlation function (ACF) based on FFT for all possible lags k. - - Data array to calculate auto correlation for. - An array with the ACF as a function of the lags k. - - - - Auto-correlation function (ACF) based on FFT for lags between kMin and kMax. - - The data array to calculate auto correlation for. - Max lag to calculate ACF for must be positive and smaller than x.Length. - Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length. - An array with the ACF as a function of the lags k. - - - - Auto-correlation function based on FFT for lags k. - - The data array to calculate auto correlation for. - Array with lags to calculate ACF for. - An array with the ACF as a function of the lags k. - - - - The internal method for calculating the auto-correlation. - - The data array to calculate auto-correlation for - Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length - Max lag (EXCLUSIVE) to calculate ACF for must be positive and smaller than x.Length - An array with the ACF as a function of the lags k. - - - - Computes the Pearson Product-Moment Correlation coefficient. - - Sample data A. - Sample data B. - The Pearson product-moment correlation coefficient. - - - - Computes the Weighted Pearson Product-Moment Correlation coefficient. - - Sample data A. - Sample data B. - Corresponding weights of data. - The Weighted Pearson product-moment correlation coefficient. - - - - Computes the Pearson Product-Moment Correlation matrix. - - Array of sample data vectors. - The Pearson product-moment correlation matrix. - - - - Computes the Pearson Product-Moment Correlation matrix. - - Enumerable of sample data vectors. - The Pearson product-moment correlation matrix. - - - - Computes the Spearman Ranked Correlation coefficient. - - Sample data series A. - Sample data series B. - The Spearman ranked correlation coefficient. - - - - Computes the Spearman Ranked Correlation matrix. - - Array of sample data vectors. - The Spearman ranked correlation matrix. - - - - Computes the Spearman Ranked Correlation matrix. - - Enumerable of sample data vectors. - The Spearman ranked correlation matrix. - - - - Computes the basic statistics of data set. The class meets the - NIST standard of accuracy for mean, variance, and standard deviation - (the only statistics they provide exact values for) and exceeds them - in increased accuracy mode. - Recommendation: consider to use RunningStatistics instead. 
- - - This type declares a DataContract for out of the box ephemeral serialization - with engines like DataContractSerializer, Protocol Buffers and FsPickler, - but does not guarantee any compatibility between versions. - It is not recommended to rely on this mechanism for durable persistence. - - - - - Initializes a new instance of the class. - - The sample data. - - If set to true, increased accuracy mode used. - Increased accuracy mode uses types for internal calculations. - - - Don't use increased accuracy for data sets containing large values (in absolute value). - This may cause the calculations to overflow. - - - - - Initializes a new instance of the class. - - The sample data. - - If set to true, increased accuracy mode used. - Increased accuracy mode uses types for internal calculations. - - - Don't use increased accuracy for data sets containing large values (in absolute value). - This may cause the calculations to overflow. - - - - - Gets the size of the sample. - - The size of the sample. - - - - Gets the sample mean. - - The sample mean. - - - - Gets the unbiased population variance estimator (on a dataset of size N will use an N-1 normalizer). - - The sample variance. - - - - Gets the unbiased population standard deviation (on a dataset of size N will use an N-1 normalizer). - - The sample standard deviation. - - - - Gets the sample skewness. - - The sample skewness. - Returns zero if is less than three. - - - - Gets the sample kurtosis. - - The sample kurtosis. - Returns zero if is less than four. - - - - Gets the maximum sample value. - - The maximum sample value. - - - - Gets the minimum sample value. - - The minimum sample value. - - - - Computes descriptive statistics from a stream of data values. - - A sequence of datapoints. - - - - Computes descriptive statistics from a stream of nullable data values. - - A sequence of datapoints. - - - - Computes descriptive statistics from a stream of data values. - - A sequence of datapoints. - - - - Computes descriptive statistics from a stream of nullable data values. - - A sequence of datapoints. - - - - Internal use. Method use for setting the statistics. - - For setting Mean. - For setting Variance. - For setting Skewness. - For setting Kurtosis. - For setting Minimum. - For setting Maximum. - For setting Count. - - - - A consists of a series of s, - each representing a region limited by a lower bound (exclusive) and an upper bound (inclusive). - - - This type declares a DataContract for out of the box ephemeral serialization - with engines like DataContractSerializer, Protocol Buffers and FsPickler, - but does not guarantee any compatibility between versions. - It is not recommended to rely on this mechanism for durable persistence. - - - - - This IComparer performs comparisons between a point and a bucket. - - - - - Compares a point and a bucket. The point will be encapsulated in a bucket with width 0. - - The first bucket to compare. - The second bucket to compare. - -1 when the point is less than this bucket, 0 when it is in this bucket and 1 otherwise. - - - - Lower Bound of the Bucket. - - - - - Upper Bound of the Bucket. - - - - - The number of datapoints in the bucket. - - - Value may be NaN if this was constructed as a argument. - - - - - Initializes a new instance of the Bucket class. - - - - - Constructs a Bucket that can be used as an argument for a - like when performing a Binary search. - - Value to look for - - - - Creates a copy of the Bucket with the lowerbound, upperbound and counts exactly equal. 
- - A cloned Bucket object. - - - - Width of the Bucket. - - - - - True if this is a single point argument for - when performing a Binary search. - - - - - Default comparer. - - - - - This method check whether a point is contained within this bucket. - - The point to check. - - 0 if the point falls within the bucket boundaries; - -1 if the point is smaller than the bucket, - +1 if the point is larger than the bucket. - - - - Comparison of two disjoint buckets. The buckets cannot be overlapping. - - - 0 if UpperBound and LowerBound are bit-for-bit equal - 1 if This bucket is lower that the compared bucket - -1 otherwise - - - - - Checks whether two Buckets are equal. - - - UpperBound and LowerBound are compared bit-for-bit, but This method tolerates a - difference in Count given by . - - - - - Provides a hash code for this bucket. - - - - - Formats a human-readable string for this bucket. - - - - - A class which computes histograms of data. - - - - - Contains all the Buckets of the Histogram. - - - - - Indicates whether the elements of buckets are currently sorted. - - - - - Initializes a new instance of the Histogram class. - - - - - Constructs a Histogram with a specific number of equally sized buckets. The upper and lower bound of the histogram - will be set to the smallest and largest datapoint. - - The data sequence to build a histogram on. - The number of buckets to use. - - - - Constructs a Histogram with a specific number of equally sized buckets. - - The data sequence to build a histogram on. - The number of buckets to use. - The histogram lower bound. - The histogram upper bound. - - - - Add one data point to the histogram. If the datapoint falls outside the range of the histogram, - the lowerbound or upperbound will automatically adapt. - - The datapoint which we want to add. - - - - Add a sequence of data point to the histogram. If the datapoint falls outside the range of the histogram, - the lowerbound or upperbound will automatically adapt. - - The sequence of datapoints which we want to add. - - - - Adds a Bucket to the Histogram. - - - - - Sort the buckets if needed. - - - - - Returns the Bucket that contains the value v. - - The point to search the bucket for. - A copy of the bucket containing point . - - - - Returns the index in the Histogram of the Bucket - that contains the value v. - - The point to search the bucket index for. - The index of the bucket containing the point. - - - - Returns the lower bound of the histogram. - - - - - Returns the upper bound of the histogram. - - - - - Gets the n'th bucket. - - The index of the bucket to be returned. - A copy of the n'th bucket. - - - - Gets the number of buckets. - - - - - Gets the total number of datapoints in the histogram. - - - - - Prints the buckets contained in the . - - - - - Kernel density estimation (KDE). - - - - - Estimate the probability density function of a random variable. - - - The routine assumes that the provided kernel is well defined, i.e. a real non-negative function that integrates to 1. - - - - - Estimate the probability density function of a random variable with a Gaussian kernel. - - - - - Estimate the probability density function of a random variable with an Epanechnikov kernel. - The Epanechnikov kernel is optimal in a mean square error sense. - - - - - Estimate the probability density function of a random variable with a uniform kernel. - - - - - Estimate the probability density function of a random variable with a triangular kernel. 
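The kernel density estimation entries above plug a kernel K (Gaussian by default, or the Epanechnikov/uniform/triangular kernels listed next) into the textbook estimator f(x) = 1/(n*h) * Σ_i K((x - x_i)/h). A hedged sketch with the Gaussian kernel follows; the bandwidth handling and all names are illustrative, and the library's parameterisation may differ:

```csharp
using System;
using System.Linq;

static class KdeSketch
{
    // Gaussian kernel: PDF of the standard normal distribution (mean 0, variance 1).
    static double GaussianKernel(double u) =>
        Math.Exp(-0.5 * u * u) / Math.Sqrt(2.0 * Math.PI);

    // Textbook kernel density estimate at point x, given samples and a bandwidth h:
    //   f(x) = 1/(n*h) * sum_i K((x - samples[i]) / h)
    static double Estimate(double x, double[] samples, double h) =>
        samples.Sum(s => GaussianKernel((x - s) / h)) / (samples.Length * h);

    static void Main()
    {
        var data = new[] { -0.2, 0.1, 0.3, 0.8, 1.1 };
        Console.WriteLine(Estimate(0.5, data, 0.5)); // density estimate at x = 0.5
    }
}
```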
- - - - - A Gaussian kernel (PDF of Normal distribution with mean 0 and variance 1). - This kernel is the default. - - - - - Epanechnikov Kernel: - x => Math.Abs(x) <= 1.0 ? 3.0/4.0(1.0-x^2) : 0.0 - - - - - Uniform Kernel: - x => Math.Abs(x) <= 1.0 ? 1.0/2.0 : 0.0 - - - - - Triangular Kernel: - x => Math.Abs(x) <= 1.0 ? (1.0-Math.Abs(x)) : 0.0 - - - - - A hybrid Monte Carlo sampler for multivariate distributions. - - - - - Number of parameters in the density function. - - - - - Distribution to sample momentum from. - - - - - Standard deviations used in the sampling of different components of the - momentum. - - - - - Gets or sets the standard deviations used in the sampling of different components of the - momentum. - - When the length of pSdv is not the same as Length. - - - - Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. - The components of the momentum will be sampled from a normal distribution with standard deviation - 1 using the default random - number generator. A three point estimation will be used for differentiation. - This constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - When the number of burnInterval iteration is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. - The components of the momentum will be sampled from a normal distribution with standard deviation - specified by pSdv using the default random - number generator. A three point estimation will be used for differentiation. - This constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - The standard deviations of the normal distributions that are used to sample - the components of the momentum. - When the number of burnInterval iteration is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. - The components of the momentum will be sampled from a normal distribution with standard deviation - specified by pSdv using the a random number generator provided by the user. - A three point estimation will be used for differentiation. - This constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - The standard deviations of the normal distributions that are used to sample - the components of the momentum. - Random number generator used for sampling the momentum. - When the number of burnInterval iteration is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. - The components of the momentum will be sampled from a normal distribution with standard deviations - given by pSdv. This constructor will set the burn interval, the method used for - numerical differentiation and the random number generator. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. 
- The number of iterations in between returning samples. - The standard deviations of the normal distributions that are used to sample - the components of the momentum. - Random number generator used for sampling the momentum. - The method used for numerical differentiation. - When the number of burnInterval iteration is negative. - When the length of pSdv is not the same as x0. - - - - Initialize parameters. - - The current location of the sampler. - - - - Checking that the location and the momentum are of the same dimension and that each component is positive. - - The standard deviations used for sampling the momentum. - When the length of pSdv is not the same as Length or if any - component is negative. - When pSdv is null. - - - - Use for copying objects in the Burn method. - - The source of copying. - A copy of the source object. - - - - Use for creating temporary objects in the Burn method. - - An object of type T. - - - - - - - - - - - - - Samples the momentum from a normal distribution. - - The momentum to be randomized. - - - - The default method used for computing the gradient. Uses a simple three point estimation. - - Function which the gradient is to be evaluated. - The location where the gradient is to be evaluated. - The gradient of the function at the point x. - - - - The Hybrid (also called Hamiltonian) Monte Carlo produces samples from distribution P using a set - of Hamiltonian equations to guide the sampling process. It uses the negative of the log density as - a potential energy, and a randomly generated momentum to set up a Hamiltonian system, which is then used - to sample the distribution. This can result in a faster convergence than the random walk Metropolis sampler - (). - - The type of samples this sampler produces. - - - - The delegate type that defines a derivative evaluated at a certain point. - - Function to be differentiated. - Value where the derivative is computed. - - - - Evaluates the energy function of the target distribution. - - - - - The current location of the sampler. - - - - - The number of burn iterations between two samples. - - - - - The size of each step in the Hamiltonian equation. - - - - - The number of iterations in the Hamiltonian equation. - - - - - The algorithm used for differentiation. - - - - - Gets or sets the number of iterations in between returning samples. - - When burn interval is negative. - - - - Gets or sets the number of iterations in the Hamiltonian equation. - - When frog leap steps is negative or zero. - - - - Gets or sets the size of each step in the Hamiltonian equation. - - When step size is negative or zero. - - - - Constructs a new Hybrid Monte Carlo sampler. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - Random number generator used for sampling the momentum. - The method used for differentiation. - When the number of burnInterval iteration is negative. - When either x0, pdfLnP or diff is null. - - - - Returns a sample from the distribution P. - - - - - This method runs the sampler for a number of iterations without returning a sample - - - - - Method used to update the sample location. Used in the end of the loop. - - The old energy. - The old gradient/derivative of the energy. - The new sample. - The new gradient/derivative of the energy. - The new energy. - The difference between the old Hamiltonian and new Hamiltonian. 
Use to determine - if an update should take place. - - - - Use for creating temporary objects in the Burn method. - - An object of type T. - - - - Use for copying objects in the Burn method. - - The source of copying. - A copy of the source object. - - - - Method for doing dot product. - - First vector/scalar in the product. - Second vector/scalar in the product. - - - - Method for adding, multiply the second vector/scalar by factor and then - add it to the first vector/scalar. - - First vector/scalar. - Scalar factor multiplying by the second vector/scalar. - Second vector/scalar. - - - - Multiplying the second vector/scalar by factor and then subtract it from - the first vector/scalar. - - First vector/scalar. - Scalar factor to be multiplied to the second vector/scalar. - Second vector/scalar. - - - - Method for sampling a random momentum. - - Momentum to be randomized. - - - - The Hamiltonian equations that is used to produce the new sample. - - - - - Method to compute the Hamiltonian used in the method. - - The momentum. - The energy. - Hamiltonian=E+p.p/2 - - - - Method to check and set a quantity to a non-negative value. - - Proposed value to be checked. - Returns value if it is greater than or equal to zero. - Throws when value is negative. - - - - Method to check and set a quantity to a non-negative value. - - Proposed value to be checked. - Returns value if it is greater than to zero. - Throws when value is negative or zero. - - - - Method to check and set a quantity to a non-negative value. - - Proposed value to be checked. - Returns value if it is greater than zero. - Throws when value is negative or zero. - - - - Provides utilities to analysis the convergence of a set of samples from - a . - - - - - Computes the auto correlations of a series evaluated by a function f. - - The series for computing the auto correlation. - The lag in the series - The function used to evaluate the series. - The auto correlation. - Throws if lag is zero or if lag is - greater than or equal to the length of Series. - - - - Computes the effective size of the sample when evaluated by a function f. - - The samples. - The function use for evaluating the series. - The effective size when auto correlation is taken into account. - - - - A method which samples datapoints from a proposal distribution. The implementation of this sampler - is stateless: no variables are saved between two calls to Sample. This proposal is different from - in that it doesn't take any parameters; it samples random - variables from the whole domain. - - The type of the datapoints. - A sample from the proposal distribution. - - - - A method which samples datapoints from a proposal distribution given an initial sample. The implementation - of this sampler is stateless: no variables are saved between two calls to Sample. This proposal is different from - in that it samples locally around an initial point. In other words, it - makes a small local move rather than producing a global sample from the proposal. - - The type of the datapoints. - The initial sample. - A sample from the proposal distribution. - - - - A function which evaluates a density. - - The type of data the distribution is over. - The sample we want to evaluate the density for. - - - - A function which evaluates a log density. - - The type of data the distribution is over. - The sample we want to evaluate the log density for. - - - - A function which evaluates the log of a transition kernel probability. 
- - The type for the space over which this transition kernel is defined. - The new state in the transition. - The previous state in the transition. - The log probability of the transition. - - - - The interface which every sampler must implement. - - The type of samples this sampler produces. - - - - The random number generator for this class. - - - - - Keeps track of the number of accepted samples. - - - - - Keeps track of the number of calls to the proposal sampler. - - - - - Initializes a new instance of the class. - - Thread safe instances are two and half times slower than non-thread - safe classes. - - - - Gets or sets the random number generator. - - When the random number generator is null. - - - - Returns one sample. - - - - - Returns a number of samples. - - The number of samples we want. - An array of samples. - - - - Gets the acceptance rate of the sampler. - - - - - Metropolis-Hastings sampling produces samples from distribution P by sampling from a proposal distribution Q - and accepting/rejecting based on the density of P. Metropolis-Hastings sampling doesn't require that the - proposal distribution Q is symmetric in comparison to . It does need to - be able to evaluate the proposal sampler's log density though. All densities are required to be in log space. - - The Metropolis-Hastings sampler is a stateful sampler. It keeps track of where it currently is in the domain - of the distribution P. - - The type of samples this sampler produces. - - - - Evaluates the log density function of the target distribution. - - - - - Evaluates the log transition probability for the proposal distribution. - - - - - A function which samples from a proposal distribution. - - - - - The current location of the sampler. - - - - - The log density at the current location. - - - - - The number of burn iterations between two samples. - - - - - Constructs a new Metropolis-Hastings sampler using the default random number generator. This - constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - The log transition probability for the proposal distribution. - A method that samples from the proposal distribution. - The number of iterations in between returning samples. - When the number of burnInterval iteration is negative. - - - - Gets or sets the number of iterations in between returning samples. - - When burn interval is negative. - - - - This method runs the sampler for a number of iterations without returning a sample - - - - - Returns a sample from the distribution P. - - - - - Metropolis sampling produces samples from distribution P by sampling from a proposal distribution Q - and accepting/rejecting based on the density of P. Metropolis sampling requires that the proposal - distribution Q is symmetric. All densities are required to be in log space. - - The Metropolis sampler is a stateful sampler. It keeps track of where it currently is in the domain - of the distribution P. - - The type of samples this sampler produces. - - - - Evaluates the log density function of the sampling distribution. - - - - - A function which samples from a proposal distribution. - - - - - The current location of the sampler. - - - - - The log density at the current location. - - - - - The number of burn iterations between two samples. - - - - - Constructs a new Metropolis sampler using the default random number generator. - - The initial sample. - The log density of the distribution we want to sample from. 
- A method that samples from the symmetric proposal distribution. - The number of iterations in between returning samples. - When the number of burnInterval iteration is negative. - - - - Gets or sets the number of iterations in between returning samples. - - When burn interval is negative. - - - - This method runs the sampler for a number of iterations without returning a sample - - - - - Returns a sample from the distribution P. - - - - - Rejection sampling produces samples from distribution P by sampling from a proposal distribution Q - and accepting/rejecting based on the density of P and Q. The density of P and Q don't need to - to be normalized, but we do need that for each x, P(x) < Q(x). - - The type of samples this sampler produces. - - - - Evaluates the density function of the sampling distribution. - - - - - Evaluates the density function of the proposal distribution. - - - - - A function which samples from a proposal distribution. - - - - - Constructs a new rejection sampler using the default random number generator. - - The density of the distribution we want to sample from. - The density of the proposal distribution. - A method that samples from the proposal distribution. - - - - Returns a sample from the distribution P. - - When the algorithms detects that the proposal - distribution doesn't upper bound the target distribution. - - - - A hybrid Monte Carlo sampler for univariate distributions. - - - - - Distribution to sample momentum from. - - - - - Standard deviations used in the sampling of the - momentum. - - - - - Gets or sets the standard deviation used in the sampling of the - momentum. - - When standard deviation is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution. - The momentum will be sampled from a normal distribution with standard deviation - specified by pSdv using the default random - number generator. A three point estimation will be used for differentiation. - This constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - The standard deviation of the normal distribution that is used to sample - the momentum. - When the number of burnInterval iteration is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution. - The momentum will be sampled from a normal distribution with standard deviation - specified by pSdv using a random - number generator provided by the user. A three point estimation will be used for differentiation. - This constructor will set the burn interval. - - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - The standard deviation of the normal distribution that is used to sample - the momentum. - Random number generator used to sample the momentum. - When the number of burnInterval iteration is negative. - - - - Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. - The momentum will be sampled from a normal distribution with standard deviation - given by pSdv using a random - number generator provided by the user. This constructor will set both the burn interval and the method used for - numerical differentiation. 
- - The initial sample. - The log density of the distribution we want to sample from. - Number frog leap simulation steps. - Size of the frog leap simulation steps. - The number of iterations in between returning samples. - The standard deviation of the normal distribution that is used to sample - the momentum. - The method used for numerical differentiation. - Random number generator used for sampling the momentum. - When the number of burnInterval iteration is negative. - - - - Use for copying objects in the Burn method. - - The source of copying. - A copy of the source object. - - - - Use for creating temporary objects in the Burn method. - - An object of type T. - - - - - - - - - - - - - Samples the momentum from a normal distribution. - - The momentum to be randomized. - - - - The default method used for computing the derivative. Uses a simple three point estimation. - - Function for which the derivative is to be evaluated. - The location where the derivative is to be evaluated. - The derivative of the function at the point x. - - - - Slice sampling produces samples from distribution P by uniformly sampling from under the pdf of P using - a technique described in "Slice Sampling", R. Neal, 2003. All densities are required to be in log space. - - The slice sampler is a stateful sampler. It keeps track of where it currently is in the domain - of the distribution P. - - - - - Evaluates the log density function of the target distribution. - - - - - The current location of the sampler. - - - - - The log density at the current location. - - - - - The number of burn iterations between two samples. - - - - - The scale of the slice sampler. - - - - - Constructs a new Slice sampler using the default random - number generator. The burn interval will be set to 0. - - The initial sample. - The density of the distribution we want to sample from. - The scale factor of the slice sampler. - When the scale of the slice sampler is not positive. - - - - Constructs a new slice sampler using the default random number generator. It - will set the number of burnInterval iterations and run a burnInterval phase. - - The initial sample. - The density of the distribution we want to sample from. - The number of iterations in between returning samples. - The scale factor of the slice sampler. - When the number of burnInterval iteration is negative. - When the scale of the slice sampler is not positive. - - - - Gets or sets the number of iterations in between returning samples. - - When burn interval is negative. - - - - Gets or sets the scale of the slice sampler. - - - - - This method runs the sampler for a number of iterations without returning a sample - - - - - Returns a sample from the distribution P. - - - - - Running statistics over a window of data, allows updating by adding values. - - - - - Gets the total number of samples. - - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Evaluates the variance from the provided full population. 
- On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Update the running statistics by adding another observed sample (in-place). - - - - - Update the running statistics by adding a sequence of observed sample (in-place). - - - - Replace ties with their mean (non-integer ranks). Default. - - - Replace ties with their minimum (typical sports ranking). - - - Replace ties with their maximum. - - - Permutation with increasing values at each index of ties. - - - - Running statistics accumulator, allows updating by adding values - or by combining two accumulators. - - - This type declares a DataContract for out of the box ephemeral serialization - with engines like DataContractSerializer, Protocol Buffers and FsPickler, - but does not guarantee any compatibility between versions. - It is not recommended to rely on this mechanism for durable persistence. - - - - - Gets the total number of samples. - - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population skewness from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than three entries or if any entry is NaN. - - - - - Evaluates the population skewness from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Estimates the unbiased population kurtosis from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than four entries or if any entry is NaN. - - - - - Evaluates the population kurtosis from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). 
- Returns NaN if data has less than three entries or if any entry is NaN. - - - - - Update the running statistics by adding another observed sample (in-place). - - - - - Update the running statistics by adding a sequence of observed sample (in-place). - - - - - Create a new running statistics over the combined samples of two existing running statistics. - - - - - Statistics operating on an array already sorted ascendingly. - - - - - - - - Returns the smallest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the largest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the order statistic (order 1..N) from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Estimates the median value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the p-Percentile value from the sorted data array (ascending). - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the first quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the third quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the inter-quartile range from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - - R-8, SciPy-(1/3,1/3): - Linear interpolation of the approximate medians for order statistics. - When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. - - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified - by 4 parameters a, b, c and d, consistent with Mathematica. - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - a-parameter - b-parameter - c-parameter - d-parameter - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - Sample array, must be sorted ascendingly. 
- Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the quantile tau from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the sorted data array (ascending). - The rank definition can be specified to be compatible - with an existing system. - - - - - Returns the smallest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the largest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the order statistic (order 1..N) from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Estimates the median value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the p-Percentile value from the sorted data array (ascending). - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the first quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the third quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the inter-quartile range from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - - R-8, SciPy-(1/3,1/3): - Linear interpolation of the approximate medians for order statistics. - When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. - - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified - by 4 parameters a, b, c and d, consistent with Mathematica. 
- - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - a-parameter - b-parameter - c-parameter - d-parameter - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the quantile tau from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the sorted data array (ascending). - The rank definition can be specified to be compatible - with an existing system. - - - - - Extension methods to return basic statistics on set of data. - - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The sample data. - The maximum value in the sample data. - - - - Returns the minimum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the maximum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the minimum magnitude and phase value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum magnitude and phase value in the sample data. 
- Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum magnitude and phase value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum magnitude and phase value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The data to calculate the mean of. - The mean of the sample. - - - - Evaluates the geometric mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the geometric mean of. - The geometric mean of the sample. - - - - Evaluates the geometric mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the geometric mean of. - The geometric mean of the sample. - - - - Evaluates the harmonic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the harmonic mean of. - The harmonic mean of the sample. - - - - Evaluates the harmonic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the harmonic mean of. - The harmonic mean of the sample. - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the unbiased population standard deviation from the provided samples. 
- On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the unbiased population skewness from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than three entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population skewness from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than three entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the skewness from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than two entries or if any entry is NaN. - - The full population data. - - - - Evaluates the skewness from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the unbiased population kurtosis from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than four entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population kurtosis from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than four entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the kurtosis from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than three entries or if any entry is NaN. - - The full population data. - - - - Evaluates the kurtosis from the full population. 
- Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than three entries or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the sample mean and the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the unbiased population skewness and kurtosis from the provided samples in a single pass. - Uses a normalizer (Bessel's correction; type 2). - - A subset of samples, sampled from the full population. - - - - Evaluates the skewness and kurtosis from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - - The full population data. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - The full population data. 
- - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - The full population data. - - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The full population data. - The full population data. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the RMS of. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the RMS of. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The data to calculate the mean of. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). 
- Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. 
- Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - - - - Evaluates the rank of each entry of the provided samples. - The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the provided samples. - The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the provided samples. 
- The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. 
- - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Calculates the entropy of a stream of double values in bits. - Returns NaN if any of the values in the stream are NaN. - - The data sample sequence. - - - - Calculates the entropy of a stream of double values in bits. - Returns NaN if any of the values in the stream are NaN. - Null-entries are ignored. - - The data sample sequence. - - - - Evaluates the sample mean over a moving window, for each samples. - Returns NaN if no data is empty or if any entry is NaN. - - The sample stream to calculate the mean of. - The number of last samples to consider. - - - - Statistics operating on an IEnumerable in a single pass, without keeping the full data in memory. - Can be used in a streaming way, e.g. on large datasets not fitting into memory. - - - - - - - - Returns the smallest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. 
- - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the geometric mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the geometric mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the harmonic mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the harmonic mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. 
- - - - Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample stream. - Second sample stream. - - - - Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample stream. - Second sample stream. - - - - Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population stream. - Second population stream. 
- - - - Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population stream. - Second population stream. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Calculates the entropy of a stream of double values. - Returns NaN if any of the values in the stream are NaN. - - The input stream to evaluate. - - - - - Used to simplify parallel code, particularly between the .NET 4.0 and Silverlight Code. - - - - - Executes a for loop in which iterations may run in parallel. - - The start index, inclusive. - The end index, exclusive. - The body to be invoked for each iteration range. - - - - Executes a for loop in which iterations may run in parallel. - - The start index, inclusive. - The end index, exclusive. - The partition size for splitting work into smaller pieces. - The body to be invoked for each iteration range. - - - - Executes each of the provided actions inside a discrete, asynchronous task. - - An array of actions to execute. - The actions array contains a null element. - At least one invocation of the actions threw an exception. - - - - Selects an item (such as Max or Min). - - Starting index of the loop. - Ending index of the loop - The function to select items over a subset. - The function to select the item of selection from the subsets. - The selected value. - - - - Selects an item (such as Max or Min). - - The array to iterate over. - The function to select items over a subset. - The function to select the item of selection from the subsets. - The selected value. - - - - Selects an item (such as Max or Min). - - Starting index of the loop. - Ending index of the loop - The function to select items over a subset. - The function to select the item of selection from the subsets. - Default result of the reduce function on an empty set. - The selected value. - - - - Selects an item (such as Max or Min). - - The array to iterate over. - The function to select items over a subset. - The function to select the item of selection from the subsets. - Default result of the reduce function on an empty set. - The selected value. - - - - Double-precision trigonometry toolkit. - - - - - Constant to convert a degree to grad. - - - - - Converts a degree (360-periodic) angle to a grad (400-periodic) angle. - - The degree to convert. - The converted grad angle. - - - - Converts a degree (360-periodic) angle to a radian (2*Pi-periodic) angle. - - The degree to convert. - The converted radian angle. - - - - Converts a grad (400-periodic) angle to a degree (360-periodic) angle. - - The grad to convert. - The converted degree. - - - - Converts a grad (400-periodic) angle to a radian (2*Pi-periodic) angle. - - The grad to convert. - The converted radian. - - - - Converts a radian (2*Pi-periodic) angle to a degree (360-periodic) angle. - - The radian to convert. - The converted degree. 
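A short sketch of the angle converters listed above (assuming the static MathNet.Numerics.Trig class; the input angles are arbitrary):

```csharp
// Degree/grad/radian conversion sketch (method names assumed from the descriptions above).
using System;
using MathNet.Numerics;

class AngleConversionDemo
{
    static void Main()
    {
        double rad  = Trig.DegreeToRadian(90.0);    // ~1.5708 (pi/2)
        double deg  = Trig.RadianToDegree(Math.PI); // 180
        double grad = Trig.DegreeToGrad(45.0);      // 50 on the 400-periodic scale

        Console.WriteLine($"{rad} {deg} {grad}");
    }
}
```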
- - - - Converts a radian (2*Pi-periodic) angle to a grad (400-periodic) angle. - - The radian to convert. - The converted grad. - - - - Normalized Sinc function. sinc(x) = sin(pi*x)/(pi*x). - - - - - Trigonometric Sine of an angle in radian, or opposite / hypotenuse. - - The angle in radian. - The sine of the radian angle. - - - - Trigonometric Sine of a Complex number. - - The complex value. - The sine of the complex number. - - - - Trigonometric Cosine of an angle in radian, or adjacent / hypotenuse. - - The angle in radian. - The cosine of an angle in radian. - - - - Trigonometric Cosine of a Complex number. - - The complex value. - The cosine of a complex number. - - - - Trigonometric Tangent of an angle in radian, or opposite / adjacent. - - The angle in radian. - The tangent of the radian angle. - - - - Trigonometric Tangent of a Complex number. - - The complex value. - The tangent of the complex number. - - - - Trigonometric Cotangent of an angle in radian, or adjacent / opposite. Reciprocal of the tangent. - - The angle in radian. - The cotangent of an angle in radian. - - - - Trigonometric Cotangent of a Complex number. - - The complex value. - The cotangent of the complex number. - - - - Trigonometric Secant of an angle in radian, or hypotenuse / adjacent. Reciprocal of the cosine. - - The angle in radian. - The secant of the radian angle. - - - - Trigonometric Secant of a Complex number. - - The complex value. - The secant of the complex number. - - - - Trigonometric Cosecant of an angle in radian, or hypotenuse / opposite. Reciprocal of the sine. - - The angle in radian. - Cosecant of an angle in radian. - - - - Trigonometric Cosecant of a Complex number. - - The complex value. - The cosecant of a complex number. - - - - Trigonometric principal Arc Sine in radian - - The opposite for a unit hypotenuse (i.e. opposite / hypotenuse). - The angle in radian. - - - - Trigonometric principal Arc Sine of this Complex number. - - The complex value. - The arc sine of a complex number. - - - - Trigonometric principal Arc Cosine in radian - - The adjacent for a unit hypotenuse (i.e. adjacent / hypotenuse). - The angle in radian. - - - - Trigonometric principal Arc Cosine of this Complex number. - - The complex value. - The arc cosine of a complex number. - - - - Trigonometric principal Arc Tangent in radian - - The opposite for a unit adjacent (i.e. opposite / adjacent). - The angle in radian. - - - - Trigonometric principal Arc Tangent of this Complex number. - - The complex value. - The arc tangent of a complex number. - - - - Trigonometric principal Arc Cotangent in radian - - The adjacent for a unit opposite (i.e. adjacent / opposite). - The angle in radian. - - - - Trigonometric principal Arc Cotangent of this Complex number. - - The complex value. - The arc cotangent of a complex number. - - - - Trigonometric principal Arc Secant in radian - - The hypotenuse for a unit adjacent (i.e. hypotenuse / adjacent). - The angle in radian. - - - - Trigonometric principal Arc Secant of this Complex number. - - The complex value. - The arc secant of a complex number. - - - - Trigonometric principal Arc Cosecant in radian - - The hypotenuse for a unit opposite (i.e. hypotenuse / opposite). - The angle in radian. - - - - Trigonometric principal Arc Cosecant of this Complex number. - - The complex value. - The arc cosecant of a complex number. - - - - Hyperbolic Sine - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic sine of the angle. 
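The real-argument trigonometric, reciprocal and hyperbolic helpers documented above, as a quick sketch (method names are as I recall them from the 4.x Trig class, so treat them as assumptions):

```csharp
// Trig toolkit sketch: sine, cosecant, arc sine, hyperbolic sine and the normalized sinc.
using System;
using MathNet.Numerics;

class TrigDemo
{
    static void Main()
    {
        double x = Trig.DegreeToRadian(30.0);

        Console.WriteLine(Trig.Sin(x));    // 0.5
        Console.WriteLine(Trig.Csc(x));    // 2.0 (reciprocal of the sine)
        Console.WriteLine(Trig.Asin(0.5)); // back to ~0.5236 rad
        Console.WriteLine(Trig.Sinh(1.0)); // hyperbolic sine
        Console.WriteLine(Trig.Sinc(0.5)); // sin(pi*x)/(pi*x)
    }
}
```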
- - - - Hyperbolic Sine of a Complex number. - - The complex value. - The hyperbolic sine of a complex number. - - - - Hyperbolic Cosine - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic Cosine of the angle. - - - - Hyperbolic Cosine of a Complex number. - - The complex value. - The hyperbolic cosine of a complex number. - - - - Hyperbolic Tangent in radian - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic tangent of the angle. - - - - Hyperbolic Tangent of a Complex number. - - The complex value. - The hyperbolic tangent of a complex number. - - - - Hyperbolic Cotangent - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic cotangent of the angle. - - - - Hyperbolic Cotangent of a Complex number. - - The complex value. - The hyperbolic cotangent of a complex number. - - - - Hyperbolic Secant - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic secant of the angle. - - - - Hyperbolic Secant of a Complex number. - - The complex value. - The hyperbolic secant of a complex number. - - - - Hyperbolic Cosecant - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic cosecant of the angle. - - - - Hyperbolic Cosecant of a Complex number. - - The complex value. - The hyperbolic cosecant of a complex number. - - - - Hyperbolic Area Sine - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Sine of this Complex number. - - The complex value. - The hyperbolic arc sine of a complex number. - - - - Hyperbolic Area Cosine - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cosine of this Complex number. - - The complex value. - The hyperbolic arc cosine of a complex number. - - - - Hyperbolic Area Tangent - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Tangent of this Complex number. - - The complex value. - The hyperbolic arc tangent of a complex number. - - - - Hyperbolic Area Cotangent - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cotangent of this Complex number. - - The complex value. - The hyperbolic arc cotangent of a complex number. - - - - Hyperbolic Area Secant - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Secant of this Complex number. - - The complex value. - The hyperbolic arc secant of a complex number. - - - - Hyperbolic Area Cosecant - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cosecant of this Complex number. - - The complex value. - The hyperbolic arc cosecant of a complex number. - - - - Hamming window. Named after Richard Hamming. - Symmetric version, useful e.g. for filter design purposes. - - - - - Hamming window. Named after Richard Hamming. - Periodic version, useful e.g. for FFT purposes. - - - - - Hann window. Named after Julius von Hann. - Symmetric version, useful e.g. for filter design purposes. - - - - - Hann window. Named after Julius von Hann. - Periodic version, useful e.g. for FFT purposes. - - - - - Cosine window. - Symmetric version, useful e.g. for filter design purposes. - - - - - Cosine window. - Periodic version, useful e.g. for FFT purposes. - - - - - Lanczos window. - Symmetric version, useful e.g. for filter design purposes. - - - - - Lanczos window. 
- Periodic version, useful e.g. for FFT purposes. - - - - - Gauss window. - - - - - Blackman window. - - - - - Blackman-Harris window. - - - - - Blackman-Nuttall window. - - - - - Bartlett window. - - - - - Bartlett-Hann window. - - - - - Nuttall window. - - - - - Flat top window. - - - - - Uniform rectangular (Dirichlet) window. - - - - - Triangular window. - - - - - Tukey tapering window. A rectangular window bounded - by half a cosine window on each side. - - Width of the window - Fraction of the window occupied by the cosine parts - -
-
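The window functions listed above are plain factories that return a coefficient array; a brief sketch (assuming MathNet.Numerics.Window; the length N and the Tukey taper fraction are arbitrary):

```csharp
// Window factory sketch: symmetric Hamming for filter design, periodic Hann for FFT, Tukey taper.
using System;
using MathNet.Numerics;

class WindowDemo
{
    static void Main()
    {
        const int N = 1024;

        double[] hammingSym = Window.Hamming(N);    // symmetric version, e.g. for filter design
        double[] hannFft    = Window.HannPeriodic(N); // periodic version, e.g. for FFT use
        double[] tukey      = Window.Tukey(N, 0.5); // half of the window occupied by the cosine tapers

        Console.WriteLine($"{hammingSym[0]:F3} .. {hannFft[N / 2]:F3} .. {tukey[N / 2]:F3}");
    }
}
```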
diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.dll b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.dll deleted file mode 100755 index 506eade..0000000 Binary files a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.dll and /dev/null differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.xml b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.xml deleted file mode 100755 index 4652128..0000000 --- a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.xml +++ /dev/null @@ -1,53895 +0,0 @@ - - - - MathNet.Numerics - - - - - Useful extension methods for Arrays. - - - - - Copies the values from on array to another. - - The source array. - The destination array. - - - - Copies the values from on array to another. - - The source array. - The destination array. - - - - Copies the values from on array to another. - - The source array. - The destination array. - - - - Copies the values from on array to another. - - The source array. - The destination array. - - - - Enumerative Combinatorics and Counting. - - - - - Count the number of possible variations without repetition. - The order matters and each object can be chosen only once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - Maximum number of distinct variations. - - - - Count the number of possible variations with repetition. - The order matters and each object can be chosen more than once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. - Maximum number of distinct variations with repetition. - - - - Count the number of possible combinations without repetition. - The order does not matter and each object can be chosen only once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - Maximum number of combinations. - - - - Count the number of possible combinations with repetition. - The order does not matter and an object can be chosen more than once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. - Maximum number of combinations with repetition. - - - - Count the number of possible permutations (without repetition). - - Number of (distinguishable) elements in the set. - Maximum number of permutations without repetition. - - - - Generate a random permutation, without repetition, by generating the index numbers 0 to N-1 and shuffle them randomly. - Implemented using Fisher-Yates Shuffling. - - An array of length N that contains (in any order) the integers of the interval [0, N). - Number of (distinguishable) elements in the set. - The random number generator to use. Optional; the default random source will be used if null. - - - - Select a random permutation, without repetition, from a data array by reordering the provided array in-place. - Implemented using Fisher-Yates Shuffling. The provided data array will be modified. - - The data array to be reordered. The array will be modified by this routine. - The random number generator to use. Optional; the default random source will be used if null. - - - - Select a random permutation from a data sequence by returning the provided data in random order. - Implemented using Fisher-Yates Shuffling. 
- - The data elements to be reordered. - The random number generator to use. Optional; the default random source will be used if null. - - - - Generate a random combination, without repetition, by randomly selecting some of N elements. - - Number of elements in the set. - The random number generator to use. Optional; the default random source will be used if null. - Boolean mask array of length N, for each item true if it is selected. - - - - Generate a random combination, without repetition, by randomly selecting k of N elements. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - Boolean mask array of length N, for each item true if it is selected. - - - - Select a random combination, without repetition, from a data sequence by selecting k elements in original order. - - The data source to choose from. - Number of elements (k) to choose from the data set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen combination, in the original order. - - - - Generates a random combination, with repetition, by randomly selecting k of N elements. - - Number of elements in the set. - Number of elements to choose from the set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - Integer mask array of length N, for each item the number of times it was selected. - - - - Select a random combination, with repetition, from a data sequence by selecting k elements in original order. - - The data source to choose from. - Number of elements (k) to choose from the data set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen combination with repetition, in the original order. - - - - Generate a random variation, without repetition, by randomly selecting k of n elements with order. - Implemented using partial Fisher-Yates Shuffling. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - An array of length K that contains the indices of the selections as integers of the interval [0, N). - - - - Select a random variation, without repetition, from a data sequence by randomly selecting k elements in random order. - Implemented using partial Fisher-Yates Shuffling. - - The data source to choose from. - Number of elements (k) to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen variation, in random order. - - - - Generate a random variation, with repetition, by randomly selecting k of n elements with order. - - Number of elements in the set. - Number of elements to choose from the set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - An array of length K that contains the indices of the selections as integers of the interval [0, N). - - - - Select a random variation, with repetition, from a data sequence by randomly selecting k elements in random order. - - The data source to choose from. 
- Number of elements (k) to choose from the data set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen variation with repetition, in random order. - - - - 32-bit single precision complex numbers class. - - - - The class Complex32 provides all elementary operations - on complex numbers. All the operators +, -, - *, /, ==, != are defined in the - canonical way. Additional complex trigonometric functions - are also provided. Note that the Complex32 structures - has two special constant values and - . - - - - Complex32 x = new Complex32(1f,2f); - Complex32 y = Complex32.FromPolarCoordinates(1f, Math.Pi); - Complex32 z = (x + y) / (x - y); - - - - For mathematical details about complex numbers, please - have a look at the - Wikipedia - - - - - - The real component of the complex number. - - - - - The imaginary component of the complex number. - - - - - Initializes a new instance of the Complex32 structure with the given real - and imaginary parts. - - The value for the real component. - The value for the imaginary component. - - - - Creates a complex number from a point's polar coordinates. - - A complex number. - The magnitude, which is the distance from the origin (the intersection of the x-axis and the y-axis) to the number. - The phase, which is the angle from the line to the horizontal axis, measured in radians. - - - - Returns a new instance - with a real number equal to zero and an imaginary number equal to zero. - - - - - Returns a new instance - with a real number equal to one and an imaginary number equal to zero. - - - - - Returns a new instance - with a real number equal to zero and an imaginary number equal to one. - - - - - Returns a new instance - with real and imaginary numbers positive infinite. - - - - - Returns a new instance - with real and imaginary numbers not a number. - - - - - Gets the real component of the complex number. - - The real component of the complex number. - - - - Gets the real imaginary component of the complex number. - - The real imaginary component of the complex number. - - - - Gets the phase or argument of this Complex32. - - - Phase always returns a value bigger than negative Pi and - smaller or equal to Pi. If this Complex32 is zero, the Complex32 - is assumed to be positive real with an argument of zero. - - The phase or argument of this Complex32 - - - - Gets the magnitude (or absolute value) of a complex number. - - Assuming that magnitude of (inf,a) and (a,inf) and (inf,inf) is inf and (NaN,a), (a,NaN) and (NaN,NaN) is NaN - The magnitude of the current instance. - - - - Gets the squared magnitude (or squared absolute value) of a complex number. - - The squared magnitude of the current instance. - - - - Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) - - The unity of this Complex32. - - - - Gets a value indicating whether the Complex32 is zero. - - true if this instance is zero; otherwise, false. - - - - Gets a value indicating whether the Complex32 is one. - - true if this instance is one; otherwise, false. - - - - Gets a value indicating whether the Complex32 is the imaginary unit. - - true if this instance is ImaginaryOne; otherwise, false. - - - - Gets a value indicating whether the provided Complex32evaluates - to a value that is not a number. - - - true if this instance is ; otherwise, - false. - - - - - Gets a value indicating whether the provided Complex32 evaluates to an - infinite value. 
- - - true if this instance is infinite; otherwise, false. - - - True if it either evaluates to a complex infinity - or to a directed infinity. - - - - - Gets a value indicating whether the provided Complex32 is real. - - true if this instance is a real number; otherwise, false. - - - - Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. - - - true if this instance is real nonnegative number; otherwise, false. - - - - - Exponential of this Complex32 (exp(x), E^x). - - - The exponential of this complex number. - - - - - Natural Logarithm of this Complex32 (Base E). - - The natural logarithm of this complex number. - - - - Common Logarithm of this Complex32 (Base 10). - - The common logarithm of this complex number. - - - - Logarithm of this Complex32 with custom base. - - The logarithm of this complex number. - - - - Raise this Complex32 to the given value. - - - The exponent. - - - The complex number raised to the given exponent. - - - - - Raise this Complex32 to the inverse of the given value. - - - The root exponent. - - - The complex raised to the inverse of the given exponent. - - - - - The Square (power 2) of this Complex32 - - - The square of this complex number. - - - - - The Square Root (power 1/2) of this Complex32 - - - The square root of this complex number. - - - - - Evaluate all square roots of this Complex32. - - - - - Evaluate all cubic roots of this Complex32. - - - - - Equality test. - - One of complex numbers to compare. - The other complex numbers to compare. - true if the real and imaginary components of the two complex numbers are equal; false otherwise. - - - - Inequality test. - - One of complex numbers to compare. - The other complex numbers to compare. - true if the real or imaginary components of the two complex numbers are not equal; false otherwise. - - - - Unary addition. - - The complex number to operate on. - Returns the same complex number. - - - - Unary minus. - - The complex number to operate on. - The negated value of the . - - - Addition operator. Adds two complex numbers together. - The result of the addition. - One of the complex numbers to add. - The other complex numbers to add. - - - Subtraction operator. Subtracts two complex numbers. - The result of the subtraction. - The complex number to subtract from. - The complex number to subtract. - - - Addition operator. Adds a complex number and float together. - The result of the addition. - The complex numbers to add. - The float value to add. - - - Subtraction operator. Subtracts float value from a complex value. - The result of the subtraction. - The complex number to subtract from. - The float value to subtract. - - - Addition operator. Adds a complex number and float together. - The result of the addition. - The float value to add. - The complex numbers to add. - - - Subtraction operator. Subtracts complex value from a float value. - The result of the subtraction. - The float vale to subtract from. - The complex value to subtract. - - - Multiplication operator. Multiplies two complex numbers. - The result of the multiplication. - One of the complex numbers to multiply. - The other complex number to multiply. - - - Multiplication operator. Multiplies a complex number with a float value. - The result of the multiplication. - The float value to multiply. - The complex number to multiply. - - - Multiplication operator. Multiplies a complex number with a float value. - The result of the multiplication. - The complex number to multiply. 
- The float value to multiply. - - - Division operator. Divides a complex number by another. - Enhanced Smith's algorithm for dividing two complex numbers - - The result of the division. - The dividend. - The divisor. - - - - Helper method for dividing. - - Re first - Im first - Re second - Im second - - - - - Division operator. Divides a float value by a complex number. - Algorithm based on Smith's algorithm - - The result of the division. - The dividend. - The divisor. - - - Division operator. Divides a complex number by a float value. - The result of the division. - The dividend. - The divisor. - - - - Computes the conjugate of a complex number and returns the result. - - - - - Returns the multiplicative inverse of a complex number. - - - - - Converts the value of the current complex number to its equivalent string representation in Cartesian form. - - The string representation of the current instance in Cartesian form. - - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified format for its real and imaginary parts. - - The string representation of the current instance in Cartesian form. - A standard or custom numeric format string. - - is not a valid format string. - - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified culture-specific formatting information. - - The string representation of the current instance in Cartesian form, as specified by . - An object that supplies culture-specific formatting information. - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified format and culture-specific format information for its real and imaginary parts. - The string representation of the current instance in Cartesian form, as specified by and . - A standard or custom numeric format string. - An object that supplies culture-specific formatting information. - - is not a valid format string. - - - - Checks if two complex numbers are equal. Two complex numbers are equal if their - corresponding real and imaginary components are equal. - - - Returns true if the two objects are the same object, or if their corresponding - real and imaginary components are equal, false otherwise. - - - The complex number to compare to with. - - - - - The hash code for the complex number. - - - The hash code of the complex number. - - - The hash code is calculated as - System.Math.Exp(ComplexMath.Absolute(complexNumber)). - - - - - Checks if two complex numbers are equal. Two complex numbers are equal if their - corresponding real and imaginary components are equal. - - - Returns true if the two objects are the same object, or if their corresponding - real and imaginary components are equal, false otherwise. - - - The complex number to compare to with. - - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a float. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Parse a part (real or complex) from a complex number. - - Start Token. - Is set to true if the part identified itself as being imaginary. - - An that supplies culture-specific - formatting information. - - Resulting part as float. 
- - - - - Converts the string representation of a complex number to a single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Converts the string representation of a complex number to single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Explicit conversion of a real decimal to a Complex32. - - The decimal value to convert. - The result of the conversion. - - - - Explicit conversion of a Complex to a Complex32. - - The decimal value to convert. - The result of the conversion. - - - - Implicit conversion of a real byte to a Complex32. - - The byte value to convert. - The result of the conversion. - - - - Implicit conversion of a real short to a Complex32. - - The short value to convert. - The result of the conversion. - - - - Implicit conversion of a signed byte to a Complex32. - - The signed byte value to convert. - The result of the conversion. - - - - Implicit conversion of a unsigned real short to a Complex32. - - The unsigned short value to convert. - The result of the conversion. - - - - Implicit conversion of a real int to a Complex32. - - The int value to convert. - The result of the conversion. - - - - Implicit conversion of a BigInteger int to a Complex32. - - The BigInteger value to convert. - The result of the conversion. - - - - Implicit conversion of a real long to a Complex32. - - The long value to convert. - The result of the conversion. - - - - Implicit conversion of a real uint to a Complex32. - - The uint value to convert. - The result of the conversion. - - - - Implicit conversion of a real ulong to a Complex32. - - The ulong value to convert. - The result of the conversion. - - - - Implicit conversion of a real float to a Complex32. - - The float value to convert. - The result of the conversion. - - - - Implicit conversion of a real double to a Complex32. - - The double value to convert. - The result of the conversion. - - - - Converts this Complex32 to a . - - A with the same values as this Complex32. - - - - Returns the additive inverse of a specified complex number. - - The result of the real and imaginary components of the value parameter multiplied by -1. - A complex number. - - - - Computes the conjugate of a complex number and returns the result. - - The conjugate of . - A complex number. - - - - Adds two complex numbers and returns the result. - - The sum of and . - The first complex number to add. - The second complex number to add. - - - - Subtracts one complex number from another and returns the result. - - The result of subtracting from . - The value to subtract from (the minuend). - The value to subtract (the subtrahend). - - - - Returns the product of two complex numbers. - - The product of the and parameters. - The first complex number to multiply. - The second complex number to multiply. 
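Stepping back to the Combinatorics helpers documented earlier in this file (before the Complex32 section), a small usage sketch (assuming the static MathNet.Numerics.Combinatorics class):

```csharp
// Counting and Fisher-Yates shuffling sketch for the Combinatorics helpers described above.
using System;
using MathNet.Numerics;

class CombinatoricsDemo
{
    static void Main()
    {
        Console.WriteLine(Combinatorics.Combinations(49, 6)); // 13983816 unordered draws without repetition
        Console.WriteLine(Combinatorics.Variations(10, 3));   // 720 ordered selections without repetition
        Console.WriteLine(Combinatorics.Permutations(5));     // 120

        // Random permutation of the index set {0..9}, implemented with Fisher-Yates shuffling.
        int[] order = Combinatorics.GeneratePermutation(10);
        Console.WriteLine(string.Join(" ", order));
    }
}
```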
- - - - Divides one complex number by another and returns the result. - - The quotient of the division. - The complex number to be divided. - The complex number to divide by. - - - - Returns the multiplicative inverse of a complex number. - - The reciprocal of . - A complex number. - - - - Returns the square root of a specified complex number. - - The square root of . - A complex number. - - - - Gets the absolute value (or magnitude) of a complex number. - - The absolute value of . - A complex number. - - - - Returns e raised to the power specified by a complex number. - - The number e raised to the power . - A complex number that specifies a power. - - - - Returns a specified complex number raised to a power specified by a complex number. - - The complex number raised to the power . - A complex number to be raised to a power. - A complex number that specifies a power. - - - - Returns a specified complex number raised to a power specified by a single-precision floating-point number. - - The complex number raised to the power . - A complex number to be raised to a power. - A single-precision floating-point number that specifies a power. - - - - Returns the natural (base e) logarithm of a specified complex number. - - The natural (base e) logarithm of . - A complex number. - - - - Returns the logarithm of a specified complex number in a specified base. - - The logarithm of in base . - A complex number. - The base of the logarithm. - - - - Returns the base-10 logarithm of a specified complex number. - - The base-10 logarithm of . - A complex number. - - - - Returns the sine of the specified complex number. - - The sine of . - A complex number. - - - - Returns the cosine of the specified complex number. - - The cosine of . - A complex number. - - - - Returns the tangent of the specified complex number. - - The tangent of . - A complex number. - - - - Returns the angle that is the arc sine of the specified complex number. - - The angle which is the arc sine of . - A complex number. - - - - Returns the angle that is the arc cosine of the specified complex number. - - The angle, measured in radians, which is the arc cosine of . - A complex number that represents a cosine. - - - - Returns the angle that is the arc tangent of the specified complex number. - - The angle that is the arc tangent of . - A complex number. - - - - Returns the hyperbolic sine of the specified complex number. - - The hyperbolic sine of . - A complex number. - - - - Returns the hyperbolic cosine of the specified complex number. - - The hyperbolic cosine of . - A complex number. - - - - Returns the hyperbolic tangent of the specified complex number. - - The hyperbolic tangent of . - A complex number. - - - - Extension methods for the Complex type provided by System.Numerics - - - - - Gets the squared magnitude of the Complex number. - - The number to perform this operation on. - The squared magnitude of the Complex number. - - - - Gets the squared magnitude of the Complex number. - - The number to perform this operation on. - The squared magnitude of the Complex number. - - - - Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) - - The unity of this Complex. - - - - Gets the conjugate of the Complex number. - - The number to perform this operation on. - - The semantic of setting the conjugate is such that - - // a, b of type Complex32 - a.Conjugate = b; - - is equivalent to - - // a, b of type Complex32 - a = b.Conjugate - - - The conjugate of the number. 
- - - - Returns the multiplicative inverse of a complex number. - - - - - Exponential of this Complex (exp(x), E^x). - - The number to perform this operation on. - - The exponential of this complex number. - - - - - Natural Logarithm of this Complex (Base E). - - The number to perform this operation on. - - The natural logarithm of this complex number. - - - - - Common Logarithm of this Complex (Base 10). - - The common logarithm of this complex number. - - - - Logarithm of this Complex with custom base. - - The logarithm of this complex number. - - - - Raise this Complex to the given value. - - The number to perform this operation on. - - The exponent. - - - The complex number raised to the given exponent. - - - - - Raise this Complex to the inverse of the given value. - - The number to perform this operation on. - - The root exponent. - - - The complex raised to the inverse of the given exponent. - - - - - The Square (power 2) of this Complex - - The number to perform this operation on. - - The square of this complex number. - - - - - The Square Root (power 1/2) of this Complex - - The number to perform this operation on. - - The square root of this complex number. - - - - - Evaluate all square roots of this Complex. - - - - - Evaluate all cubic roots of this Complex. - - - - - Gets a value indicating whether the Complex32 is zero. - - The number to perform this operation on. - true if this instance is zero; otherwise, false. - - - - Gets a value indicating whether the Complex32 is one. - - The number to perform this operation on. - true if this instance is one; otherwise, false. - - - - Gets a value indicating whether the Complex32 is the imaginary unit. - - true if this instance is ImaginaryOne; otherwise, false. - The number to perform this operation on. - - - - Gets a value indicating whether the provided Complex32evaluates - to a value that is not a number. - - The number to perform this operation on. - - true if this instance is NaN; otherwise, - false. - - - - - Gets a value indicating whether the provided Complex32 evaluates to an - infinite value. - - The number to perform this operation on. - - true if this instance is infinite; otherwise, false. - - - True if it either evaluates to a complex infinity - or to a directed infinity. - - - - - Gets a value indicating whether the provided Complex32 is real. - - The number to perform this operation on. - true if this instance is a real number; otherwise, false. - - - - Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. - - The number to perform this operation on. - - true if this instance is real nonnegative number; otherwise, false. - - - - - Returns a Norm of a value of this type, which is appropriate for measuring how - close this value is to zero. - - - - - Returns a Norm of a value of this type, which is appropriate for measuring how - close this value is to zero. - - - - - Returns a Norm of the difference of two values of this type, which is - appropriate for measuring how close together these two values are. - - - - - Returns a Norm of the difference of two values of this type, which is - appropriate for measuring how close together these two values are. - - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - The string to parse. 
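A minimal usage sketch for the single-precision Complex32 type and the string formats described above (it mirrors the documented construction example; that Parse accepts the "n + ni" form directly is my assumption):

```csharp
// Complex32 construction, arithmetic and parsing sketch.
using System;
using MathNet.Numerics;

class Complex32Demo
{
    static void Main()
    {
        var a = new Complex32(1f, 2f);
        var b = Complex32.FromPolarCoordinates(1f, (float)Math.PI); // ~ -1 + 0i

        Complex32 z = (a + b) / (a - b);  // operators as in the documented example
        Console.WriteLine(z);             // printed in Cartesian form
        Console.WriteLine(a.Conjugate()); // 1 - 2i
        Console.WriteLine(a.Magnitude);   // sqrt(5)

        Complex32 p = Complex32.Parse("3 + 4i");
        Console.WriteLine(p.Magnitude);   // 5
    }
}
```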
- - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Parse a part (real or complex) from a complex number. - - Start Token. - Is set to true if the part identified itself as being imaginary. - - An that supplies culture-specific - formatting information. - - Resulting part as double. - - - - - Converts the string representation of a complex number to a double-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. - - - - - Converts the string representation of a complex number to double-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Creates a Complex32 number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - - - Creates a Complex32 number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Converts the string representation of a complex number to a single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized. - - - - - Converts the string representation of a complex number to single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. - - - - - A collection of frequently used mathematical constants. 
- - - - The number e - - - The number log[2](e) - - - The number log[10](e) - - - The number log[e](2) - - - The number log[e](10) - - - The number log[e](pi) - - - The number log[e](2*pi)/2 - - - The number 1/e - - - The number sqrt(e) - - - The number sqrt(2) - - - The number sqrt(3) - - - The number sqrt(1/2) = 1/sqrt(2) = sqrt(2)/2 - - - The number sqrt(3)/2 - - - The number pi - - - The number pi*2 - - - The number pi/2 - - - The number pi*3/2 - - - The number pi/4 - - - The number sqrt(pi) - - - The number sqrt(2pi) - - - The number sqrt(pi/2) - - - The number sqrt(2*pi*e) - - - The number log(sqrt(2*pi)) - - - The number log(sqrt(2*pi*e)) - - - The number log(2 * sqrt(e / pi)) - - - The number 1/pi - - - The number 2/pi - - - The number 1/sqrt(pi) - - - The number 1/sqrt(2pi) - - - The number 2/sqrt(pi) - - - The number 2 * sqrt(e / pi) - - - The number (pi)/180 - factor to convert from Degree (deg) to Radians (rad). - - - - - The number (pi)/200 - factor to convert from NewGrad (grad) to Radians (rad). - - - - - The number ln(10)/20 - factor to convert from Power Decibel (dB) to Neper (Np). Use this version when the Decibel represent a power gain but the compared values are not powers (e.g. amplitude, current, voltage). - - - The number ln(10)/10 - factor to convert from Neutral Decibel (dB) to Neper (Np). Use this version when either both or neither of the Decibel and the compared values represent powers. - - - The Catalan constant - Sum(k=0 -> inf){ (-1)^k/(2*k + 1)2 } - - - The Euler-Mascheroni constant - lim(n -> inf){ Sum(k=1 -> n) { 1/k - log(n) } } - - - The number (1+sqrt(5))/2, also known as the golden ratio - - - The Glaisher constant - e^(1/12 - Zeta(-1)) - - - The Khinchin constant - prod(k=1 -> inf){1+1/(k*(k+2))^log(k,2)} - - - - The size of a double in bytes. - - - - - The size of an int in bytes. - - - - - The size of a float in bytes. - - - - - The size of a Complex in bytes. - - - - - The size of a Complex in bytes. 
- - - - Speed of Light in Vacuum: c_0 = 2.99792458e8 [m s^-1] (defined, exact; 2007 CODATA) - - - Magnetic Permeability in Vacuum: mu_0 = 4*Pi * 10^-7 [N A^-2 = kg m A^-2 s^-2] (defined, exact; 2007 CODATA) - - - Electric Permittivity in Vacuum: epsilon_0 = 1/(mu_0*c_0^2) [F m^-1 = A^2 s^4 kg^-1 m^-3] (defined, exact; 2007 CODATA) - - - Characteristic Impedance of Vacuum: Z_0 = mu_0*c_0 [Ohm = m^2 kg s^-3 A^-2] (defined, exact; 2007 CODATA) - - - Newtonian Constant of Gravitation: G = 6.67429e-11 [m^3 kg^-1 s^-2] (2007 CODATA) - - - Planck's constant: h = 6.62606896e-34 [J s = m^2 kg s^-1] (2007 CODATA) - - - Reduced Planck's constant: h_bar = h / (2*Pi) [J s = m^2 kg s^-1] (2007 CODATA) - - - Planck mass: m_p = (h_bar*c_0/G)^(1/2) [kg] (2007 CODATA) - - - Planck temperature: T_p = (h_bar*c_0^5/G)^(1/2)/k [K] (2007 CODATA) - - - Planck length: l_p = h_bar/(m_p*c_0) [m] (2007 CODATA) - - - Planck time: t_p = l_p/c_0 [s] (2007 CODATA) - - - Elementary Electron Charge: e = 1.602176487e-19 [C = A s] (2007 CODATA) - - - Magnetic Flux Quantum: theta_0 = h/(2*e) [Wb = m^2 kg s^-2 A^-1] (2007 CODATA) - - - Conductance Quantum: G_0 = 2*e^2/h [S = m^-2 kg^-1 s^3 A^2] (2007 CODATA) - - - Josephson Constant: K_J = 2*e/h [Hz V^-1] (2007 CODATA) - - - Von Klitzing Constant: R_K = h/e^2 [Ohm = m^2 kg s^-3 A^-2] (2007 CODATA) - - - Bohr Magneton: mu_B = e*h_bar/2*m_e [J T^-1] (2007 CODATA) - - - Nuclear Magneton: mu_N = e*h_bar/2*m_p [J T^-1] (2007 CODATA) - - - Fine Structure Constant: alpha = e^2/4*Pi*e_0*h_bar*c_0 [1] (2007 CODATA) - - - Rydberg Constant: R_infty = alpha^2*m_e*c_0/2*h [m^-1] (2007 CODATA) - - - Bor Radius: a_0 = alpha/4*Pi*R_infty [m] (2007 CODATA) - - - Hartree Energy: E_h = 2*R_infty*h*c_0 [J] (2007 CODATA) - - - Quantum of Circulation: h/2*m_e [m^2 s^-1] (2007 CODATA) - - - Fermi Coupling Constant: G_F/(h_bar*c_0)^3 [GeV^-2] (2007 CODATA) - - - Weak Mixin Angle: sin^2(theta_W) [1] (2007 CODATA) - - - Electron Mass: [kg] (2007 CODATA) - - - Electron Mass Energy Equivalent: [J] (2007 CODATA) - - - Electron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Electron Compton Wavelength: [m] (2007 CODATA) - - - Classical Electron Radius: [m] (2007 CODATA) - - - Thomson Cross Section: [m^2] (2002 CODATA) - - - Electron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Electon G-Factor: [1] (2007 CODATA) - - - Muon Mass: [kg] (2007 CODATA) - - - Muon Mass Energy Equivalent: [J] (2007 CODATA) - - - Muon Molar Mass: [kg mol^-1] (2007 CODATA) - - - Muon Compton Wavelength: [m] (2007 CODATA) - - - Muon Magnetic Moment: [J T^-1] (2007 CODATA) - - - Muon G-Factor: [1] (2007 CODATA) - - - Tau Mass: [kg] (2007 CODATA) - - - Tau Mass Energy Equivalent: [J] (2007 CODATA) - - - Tau Molar Mass: [kg mol^-1] (2007 CODATA) - - - Tau Compton Wavelength: [m] (2007 CODATA) - - - Proton Mass: [kg] (2007 CODATA) - - - Proton Mass Energy Equivalent: [J] (2007 CODATA) - - - Proton Molar Mass: [kg mol^-1] (2007 CODATA) - - - Proton Compton Wavelength: [m] (2007 CODATA) - - - Proton Magnetic Moment: [J T^-1] (2007 CODATA) - - - Proton G-Factor: [1] (2007 CODATA) - - - Proton Shielded Magnetic Moment: [J T^-1] (2007 CODATA) - - - Proton Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Proton Shielded Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Neutron Mass: [kg] (2007 CODATA) - - - Neutron Mass Energy Equivalent: [J] (2007 CODATA) - - - Neutron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Neuron Compton Wavelength: [m] (2007 CODATA) - - - Neutron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Neutron G-Factor: [1] 
(2007 CODATA) - - - Neutron Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Deuteron Mass: [kg] (2007 CODATA) - - - Deuteron Mass Energy Equivalent: [J] (2007 CODATA) - - - Deuteron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Deuteron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Helion Mass: [kg] (2007 CODATA) - - - Helion Mass Energy Equivalent: [J] (2007 CODATA) - - - Helion Molar Mass: [kg mol^-1] (2007 CODATA) - - - Avogadro constant: [mol^-1] (2010 CODATA) - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 - - - The SI prefix factor corresponding to 1 000 - - - The SI prefix factor corresponding to 100 - - - The SI prefix factor corresponding to 10 - - - The SI prefix factor corresponding to 0.1 - - - The SI prefix factor corresponding to 0.01 - - - The SI prefix factor corresponding to 0.001 - - - The SI prefix factor corresponding to 0.000 001 - - - The SI prefix factor corresponding to 0.000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 000 000 001 - - - - Sets parameters for the library. - - - - - Use a specific provider if configured, e.g. using - environment variables, or fall back to the best providers. - - - - - Use the best provider available. - - - - - Gets or sets a value indicating whether the distribution classes check validate each parameter. - For the multivariate distributions this could involve an expensive matrix factorization. - The default setting of this property is true. - - - - - Gets or sets a value indicating whether to use thread safe random number generators (RNG). - Thread safe RNG about two and half time slower than non-thread safe RNG. - - - true to use thread safe random number generators ; otherwise, false. - - - - - Optional path to try to load native provider binaries from. - - - - - Gets or sets a value indicating how many parallel worker threads shall be used - when parallelization is applicable. - - Default to the number of processor cores, must be between 1 and 1024 (inclusive). - - - - Gets or sets the TaskScheduler used to schedule the worker tasks. - - - - - Gets or sets the order of the matrix when linear algebra provider - must calculate multiply in parallel threads. - - The order. Default 64, must be at least 3. - - - - Gets or sets the number of elements a vector or matrix - must contain before we multiply threads. - - Number of elements. Default 300, must be at least 3. - - - - Numerical Derivative. - - - - - Initialized a NumericalDerivative with the given points and center. - - - - - Initialized a NumericalDerivative with the default points and center for the given order. - - - - - Evaluates the derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - Derivative order. - - - - Creates a function handle for the derivative of a scalar univariate function. 
- - Univariate function handle. - Derivative order. - - - - Evaluates the first derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - - - - Creates a function handle for the first derivative of a scalar univariate function. - - Univariate function handle. - - - - Evaluates the second derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - - - - Creates a function handle for the second derivative of a scalar univariate function. - - Univariate function handle. - - - - Evaluates the partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - - - - Creates a function handle for the partial derivative of a multivariate function. - - Multivariate function handle. - Index of independent variable for partial derivative. - Derivative order. - - - - Evaluates the first partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. - Index of independent variable for partial derivative. - - - - Creates a function handle for the first partial derivative of a multivariate function. - - Multivariate function handle. - Index of independent variable for partial derivative. - - - - Evaluates the partial derivative of a bivariate function. - - Bivariate function handle. - First argument at which to evaluate the derivative. - Second argument at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - - - - Creates a function handle for the partial derivative of a bivariate function. - - Bivariate function handle. - Index of independent variable for partial derivative. - Derivative order. - - - - Evaluates the first partial derivative of a bivariate function. - - Bivariate function handle. - First argument at which to evaluate the derivative. - Second argument at which to evaluate the derivative. - Index of independent variable for partial derivative. - - - - Creates a function handle for the first partial derivative of a bivariate function. - - Bivariate function handle. - Index of independent variable for partial derivative. - - - - Class to calculate finite difference coefficients using Taylor series expansion method. - - - For n points, coefficients are calculated up to the maximum derivative order possible (n-1). - The current function value position specifies the "center" for surrounding coefficients. - Selecting the first, middle or last positions represent forward, backwards and central difference methods. - - - - - - - Number of points for finite difference coefficients. Changing this value recalculates the coefficients table. - - - - - Initializes a new instance of the class. - - Number of finite difference coefficients. - - - - Gets the finite difference coefficients for a specified center and order. - - Current function position with respect to coefficients. Must be within point range. - Order of finite difference coefficients. - Vector of finite difference coefficients. - - - - Gets the finite difference coefficients for all orders at a specified center. - - Current function position with respect to coefficients. Must be within point range. - Rectangular array of coefficients, with columns specifying order. - - - - Type of finite different step size. 
- - - - - The absolute step size value will be used in numerical derivatives, regardless of order or function parameters. - - - - - A base step size value, h, will be scaled according to the function input parameter. A common example is hx = h*(1+abs(x)), however - this may vary depending on implementation. This definition only guarantees that the only scaling will be relative to the - function input parameter and not the order of the finite difference derivative. - - - - - A base step size value, eps (typically machine precision), is scaled according to the finite difference coefficient order - and function input parameter. The initial scaling according to finite different coefficient order can be thought of as producing a - base step size, h, that is equivalent to scaling. This step size is then scaled according to the function - input parameter. Although implementation may vary, an example of second order accurate scaling may be (eps)^(1/3)*(1+abs(x)). - - - - - Class to evaluate the numerical derivative of a function using finite difference approximations. - Variable point and center methods can be initialized . - This class can also be used to return function handles (delegates) for a fixed derivative order and variable. - It is possible to evaluate the derivative and partial derivative of univariate and multivariate functions respectively. - - - - - Initializes a NumericalDerivative class with the default 3 point center difference method. - - - - - Initialized a NumericalDerivative class. - - Number of points for finite difference derivatives. - Location of the center with respect to other points. Value ranges from zero to points-1. - - - - Sets and gets the finite difference step size. This value is for each function evaluation if relative step size types are used. - If the base step size used in scaling is desired, see . - - - Setting then getting the StepSize may return a different value. This is not unusual since a user-defined step size is converted to a - base-2 representable number to improve finite difference accuracy. - - - - - Sets and gets the base finite difference step size. This assigned value to this parameter is only used if is set to RelativeX. - However, if the StepType is Relative, it will contain the base step size computed from based on the finite difference order. - - - - - Sets and gets the base finite difference step size. This parameter is only used if is set to Relative. - By default this is set to machine epsilon, from which is computed. - - - - - Sets and gets the location of the center point for the finite difference derivative. - - - - - Number of times a function is evaluated for numerical derivatives. - - - - - Type of step size for computing finite differences. If set to absolute, dx = h. - If set to relative, dx = (1+abs(x))*h^(2/(order+1)). This provides accurate results when - h is approximately equal to the square-root of machine accuracy, epsilon. - - - - - Evaluates the derivative of equidistant points using the finite difference method. - - Vector of points StepSize apart. - Derivative order. - Finite difference step size. - Derivative of points of the specified order. - - - - Evaluates the derivative of a scalar univariate function. - - - Supplying the optional argument currentValue will reduce the number of function evaluations - required to calculate the finite difference derivative. - - Function handle. - Point at which to compute the derivative. - Derivative order. - Current function value at center. 
- Function derivative at x of the specified order. - - - - Creates a function handle for the derivative of a scalar univariate function. - - Input function handle. - Derivative order. - Function handle that evaluates the derivative of input function at a fixed order. - - - - Evaluates the partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - Current function value at center. - Function partial derivative at x of the specified order. - - - - Evaluates the partial derivatives of a multivariate function array. - - - This function assumes the input vector x is of the correct length for f. - - Multivariate vector function array handle. - Vector at which to evaluate the derivatives. - Index of independent variable for partial derivative. - Derivative order. - Current function value at center. - Vector of functions partial derivatives at x of the specified order. - - - - Creates a function handle for the partial derivative of a multivariate function. - - Input function handle. - Index of the independent variable for partial derivative. - Derivative order. - Function handle that evaluates partial derivative of input function at a fixed order. - - - - Creates a function handle for the partial derivative of a vector multivariate function. - - Input function handle. - Index of the independent variable for partial derivative. - Derivative order. - Function handle that evaluates partial derivative of input function at fixed order. - - - - Evaluates the mixed partial derivative of variable order for multivariate functions. - - - This function recursively uses to evaluate mixed partial derivative. - Therefore, it is more efficient to call for higher order derivatives of - a single independent variable. - - Multivariate function handle. - Points at which to evaluate the derivative. - Vector of indices for the independent variables at descending derivative orders. - Highest order of differentiation. - Current function value at center. - Function mixed partial derivative at x of the specified order. - - - - Evaluates the mixed partial derivative of variable order for multivariate function arrays. - - - This function recursively uses to evaluate mixed partial derivative. - Therefore, it is more efficient to call for higher order derivatives of - a single independent variable. - - Multivariate function array handle. - Vector at which to evaluate the derivative. - Vector of indices for the independent variables at descending derivative orders. - Highest order of differentiation. - Current function value at center. - Function mixed partial derivatives at x of the specified order. - - - - Creates a function handle for the mixed partial derivative of a multivariate function. - - Input function handle. - Vector of indices for the independent variables at descending derivative orders. - Highest derivative order. - Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. - - - - Creates a function handle for the mixed partial derivative of a multivariate vector function. - - Input vector function handle. - Vector of indices for the independent variables at descending derivative orders. - Highest derivative order. - Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. - - - - Resets the evaluation counter. 
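A minimal sketch of the numerical-derivative class described above (configurable point count and center, evaluation counter). The identifiers `NumericalDerivative`, `EvaluateDerivative`, `EvaluatePartialDerivative` and `Evaluations` are assumed from the library's naming convention, not taken from the text:

```csharp
// Sketch only; assumes MathNet.Numerics.Differentiation.NumericalDerivative.
using System;
using MathNet.Numerics.Differentiation;

Func<double, double> f = x => Math.Exp(-x * x);

// 5-point stencil with the center at index 2, i.e. central differences.
var nd = new NumericalDerivative(5, 2);

double d1 = nd.EvaluateDerivative(f, 0.5, 1);   // first derivative at x = 0.5
double d2 = nd.EvaluateDerivative(f, 0.5, 2);   // second derivative at x = 0.5

// First partial derivative of a multivariate function w.r.t. variable index 0.
Func<double[], double> g = v => v[0] * Math.Sin(v[1]);
double dg0 = nd.EvaluatePartialDerivative(g, new[] { 1.0, 2.0 }, 0, 1);

// Evaluation counter described above (property name assumed).
Console.WriteLine($"{d1} {d2} {dg0}, evaluations: {nd.Evaluations}");
```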
- - - - - Class for evaluating the Hessian of a smooth continuously differentiable function using finite differences. - By default, a central 3-point method is used. - - - - - Number of function evaluations. - - - - - Creates a numerical Hessian object with a three point central difference method. - - - - - Creates a numerical Hessian with a specified differentiation scheme. - - Number of points for Hessian evaluation. - Center point for differentiation. - - - - Evaluates the Hessian of the scalar univariate function f at point x. - - Scalar univariate function handle. - Point at which to evaluate Hessian. - Hessian tensor. - - - - Evaluates the Hessian of a multivariate function f at points x. - - - This method of computing the Hessian is only valid for Lipschitz continuous functions. - The function mirrors the Hessian along the diagonal since d2f/dxdy = d2f/dydx for continuously differentiable functions. - - Multivariate function handle. - Points at which to evaluate Hessian. - Hessian tensor. - - - - Resets the function evaluation counter for the Hessian. - - - - - Class for evaluating the Jacobian of a function using finite differences. - By default, a central 3-point method is used. - - - - - Number of function evaluations. - - - - - Creates a numerical Jacobian object with a three point central difference method. - - - - - Creates a numerical Jacobian with a specified differentiation scheme. - - Number of points for Jacobian evaluation. - Center point for differentiation. - - - - Evaluates the Jacobian of a scalar univariate function f at point x. - - Scalar univariate function handle. - Point at which to evaluate Jacobian. - Jacobian vector. - - - - Evaluates the Jacobian of a multivariate function f at vector x. - - - This function assumes that the length of vector x is consistent with the argument count of f. - - Multivariate function handle. - Points at which to evaluate Jacobian. - Jacobian vector. - - - - Evaluates the Jacobian of a multivariate function f at vector x given a current function value. - - - To minimize the number of function evaluations, a user can supply the current value of the function - to be used in computing the Jacobian. This value must correspond to the "center" location for the - finite differencing. If a scheme is used where the center value is not evaluated, this will provide no - added efficiency. This method also assumes that the length of vector x is consistent with the argument count of f. - - Multivariate function handle. - Points at which to evaluate Jacobian. - Current function value at finite difference center. - Jacobian vector. - - - - Evaluates the Jacobian of a multivariate function array f at vector x. - - Multivariate function array handle. - Vector at which to evaluate Jacobian. - Jacobian matrix. - - - - Evaluates the Jacobian of a multivariate function array f at vector x given a vector of current function values. - - - To minimize the number of function evaluations, a user can supply a vector of current values of the functions - to be used in computing the Jacobian. These values must correspond to the "center" location for the - finite differencing. If a scheme is used where the center value is not evaluated, this will provide no - added efficiency. This method also assumes that the length of vector x is consistent with the argument count of f. - - Multivariate function array handle. - Vector at which to evaluate Jacobian. - Vector of current function values. - Jacobian matrix. - - - - Resets the function evaluation counter for the Jacobian.
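A short sketch of the Hessian and Jacobian evaluators summarized above; `NumericalHessian`, `NumericalJacobian` and `Evaluate` are assumed names following the same library convention:

```csharp
// Sketch only; assumes MathNet.Numerics.Differentiation.NumericalHessian / NumericalJacobian.
using System;
using MathNet.Numerics.Differentiation;

// f(x, y) = x^2 + 3xy + y^2
Func<double[], double> f = v => v[0] * v[0] + 3 * v[0] * v[1] + v[1] * v[1];
var x0 = new[] { 1.0, 2.0 };

var hessian = new NumericalHessian();     // default 3-point central scheme
double[,] h = hessian.Evaluate(f, x0);    // expect roughly [[2, 3], [3, 2]]

var jacobian = new NumericalJacobian();   // default 3-point central scheme
double[] grad = jacobian.Evaluate(f, x0); // gradient, roughly [8, 7]

Console.WriteLine($"H = [[{h[0, 0]}, {h[0, 1]}], [{h[1, 0]}, {h[1, 1]}]], J = [{grad[0]}, {grad[1]}]");
```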
- - - - - Evaluates the Riemann-Liouville fractional derivative that uses the double exponential integration. - - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The expected relative accuracy of the Double-Exponential integration. - Approximation of the differintegral of order n at x. - - - - Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Legendre integration. - - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The number of Gauss-Legendre points. - Approximation of the differintegral of order n at x. - - - - Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Kronrod integration. - - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The expected relative accuracy of the Gauss-Kronrod integration. - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. - Approximation of the differintegral of order n at x. - - - - Metrics to measure the distance between two structures. - - - - - Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. - - - - - Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. - - - - - Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. 
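A hedged sketch of the distance metrics summarized above; the `Distance.*` helper names are assumptions, and the remaining variants documented below (Canberra, Cosine, Hamming, Pearson, Jaccard) follow the same two-vector call pattern:

```csharp
// Sketch only; assumes the MathNet.Numerics.Distance helpers.
using System;
using MathNet.Numerics;

var a = new[] { 1.0, 2.0, 3.0 };
var b = new[] { 2.0, 4.0, 6.0 };

double sad  = Distance.SAD(a, b);            // sum of absolute differences: 6
double mae  = Distance.MAE(a, b);            // mean absolute error: 2
double ssd  = Distance.SSD(a, b);            // sum of squared differences: 14
double mse  = Distance.MSE(a, b);            // mean squared error: 14/3
double l2   = Distance.Euclidean(a, b);      // sqrt(14)
double l1   = Distance.Manhattan(a, b);      // 6
double linf = Distance.Chebyshev(a, b);      // 3
double lp   = Distance.Minkowski(3.0, a, b); // generalized p-norm with p = 3

Console.WriteLine($"{sad} {mae} {ssd} {mse} {l2} {l1} {linf} {lp}");
```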
- - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. - - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. - - - - - Canberra Distance, a weighted version of the L1-norm of the difference. - - - - - Canberra Distance, a weighted version of the L1-norm of the difference. - - - - - Cosine Distance, representing the angular distance while ignoring the scale. - - - - - Cosine Distance, representing the angular distance while ignoring the scale. - - - - - Hamming Distance, i.e. the number of positions that have different values in the vectors. - - - - - Hamming Distance, i.e. the number of positions that have different values in the vectors. - - - - - Pearson's distance, i.e. 1 - the Pearson correlation coefficient. - - - - - Jaccard distance, i.e. 1 - the Jaccard index. - - Thrown if a or b are null. - Thrown if a and b are of different lengths. - Jaccard distance. - - - - Jaccard distance, i.e. 1 - the Jaccard index. - - Thrown if a or b are null. - Thrown if a and b are of different lengths. - Jaccard distance. - - - - Discrete Univariate Bernoulli distribution. - The Bernoulli distribution is a distribution over bits. The parameter - p specifies the probability that a 1 is generated. - Wikipedia - Bernoulli distribution. - - - - - Initializes a new instance of the Bernoulli class. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - If the Bernoulli parameter is not in the range [0,1]. - - - - Initializes a new instance of the Bernoulli class. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - The random number generator which is used to draw random samples. - If the Bernoulli parameter is not in the range [0,1]. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the mode of the distribution. - - - - - Gets all modes of the distribution. - - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1.
- the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the cumulative distribution at location . - - - - - Generates one sample from the Bernoulli distribution. - - The random source to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A random sample from the Bernoulli distribution. - - - - Samples a Bernoulli distributed random variable. - - A sample from the Bernoulli distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Bernoulli distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a Bernoulli distributed random variable. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A sample from the Bernoulli distribution. - - - - Samples a sequence of Bernoulli distributed random variables. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Samples a Bernoulli distributed random variable. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A sample from the Bernoulli distribution. - - - - Samples a sequence of Bernoulli distributed random variables. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Continuous Univariate Beta distribution. - For details about this distribution, see - Wikipedia - Beta distribution. - - - There are a few special cases for the parameterization of the Beta distribution. When both - shape parameters are positive infinity, the Beta distribution degenerates to a point distribution - at 0.5. When one of the shape parameters is positive infinity, the distribution degenerates to a point - distribution at the positive infinity. When both shape parameters are 0.0, the Beta distribution - degenerates to a Bernoulli distribution with parameter 0.5. When one shape parameter is 0.0, the - distribution degenerates to a point distribution at the non-zero shape parameter. - - - - - Initializes a new instance of the Beta class. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - Initializes a new instance of the Beta class. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - A string representation of the Beta distribution. 
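Wrapping up the Bernoulli distribution documented above, a minimal sketch of the instance and static forms; the member names (`Probability`, `CumulativeDistribution`, `PMF`, `CDF`) are assumed, while the p parameterization is taken from the summaries:

```csharp
// Sketch only; assumes MathNet.Numerics.Distributions.Bernoulli.
using System;
using MathNet.Numerics.Distributions;

var bernoulli = new Bernoulli(0.3);                   // p = P(X = 1) = 0.3

double pmf1 = bernoulli.Probability(1);               // 0.3
double cdf0 = bernoulli.CumulativeDistribution(0.0);  // P(X <= 0) = 0.7
int draw = bernoulli.Sample();                        // 0 or 1

// Static forms, parameterized directly by p.
double pmf0 = Bernoulli.PMF(0.3, 0);                  // 0.7
double cdf = Bernoulli.CDF(0.3, 0.5);                 // 0.7

Console.WriteLine($"{pmf1} {cdf0} {draw} {pmf0} {cdf}");
```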
- - - - Tests whether the provided values are valid parameters for this distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - Gets the α shape parameter of the Beta distribution. Range: α ≥ 0. - - - - - Gets the β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Beta distribution. - - - - - Gets the variance of the Beta distribution. - - - - - Gets the standard deviation of the Beta distribution. - - - - - Gets the entropy of the Beta distribution. - - - - - Gets the skewness of the Beta distribution. - - - - - Gets the mode of the Beta distribution; when there are multiple answers, this routine will return 0.5. - - - - - Gets the median of the Beta distribution. - - - - - Gets the minimum of the Beta distribution. - - - - - Gets the maximum of the Beta distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the Beta distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Beta distribution. - - a sequence of samples from the distribution. - - - - Samples Beta distributed random variables by sampling two Gamma variables and normalizing. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a random number from the Beta distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - the cumulative distribution at location . 
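A minimal sketch of the Beta distribution described above, including the inverse CDF that the summaries flag as slow and unreliable; all identifiers are assumed:

```csharp
// Sketch only; assumes MathNet.Numerics.Distributions.Beta.
using System;
using MathNet.Numerics.Distributions;

var beta = new Beta(2.0, 5.0);                         // shape parameters α = 2, β = 5

double pdf  = beta.Density(0.25);                      // PDF at x = 0.25
double cdf  = beta.CumulativeDistribution(0.25);       // P(X <= 0.25)
double q90  = beta.InverseCumulativeDistribution(0.9); // 90% quantile (flagged above as slow/unreliable)
double mean = beta.Mean;                               // α / (α + β) = 2/7
double draw = beta.Sample();

// Static forms parameterized by (α, β).
double pdfStatic = Beta.PDF(2.0, 5.0, 0.25);
double q90Static = Beta.InvCDF(2.0, 5.0, 0.9);

Console.WriteLine($"{pdf} {cdf} {q90} {mean} {draw} {pdfStatic} {q90Static}");
```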
- - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Discrete Univariate Beta-Binomial distribution. - The beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising - when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. - The beta-binomial distribution is the binomial distribution in which the probability of success at each of n trials is not fixed but randomly drawn from a beta distribution. - It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics to capture overdispersion in binomial type distributed data. - Wikipedia - Beta-Binomial distribution. - - - - - Initializes a new instance of the class. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - - - - Initializes a new instance of the class. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. 
- - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - - - - Tests whether the provided values are valid parameters for this distribution. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Samples BetaBinomial distributed random variables by sampling a Beta distribution then passing to a Binomial distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a random number from the BetaBinomial distribution. - - - - Samples a BetaBinomial distributed random variable. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of BetaBinomial distributed random variables. 
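A sketch of the Beta-Binomial distribution described above, using the (n, α, β) constructor order given in the summaries; the member names are assumed:

```csharp
// Sketch only; assumes MathNet.Numerics.Distributions.BetaBinomial.
using System;
using MathNet.Numerics.Distributions;

var bb = new BetaBinomial(10, 2.0, 3.0);       // n = 10 trials, success probability ~ Beta(2, 3)

double pmf4 = bb.Probability(4);               // P(X = 4)
double cdf4 = bb.CumulativeDistribution(4.0);  // P(X <= 4)
int draw = bb.Sample();                        // integer in 0..10
double mean = bb.Mean;                         // n·α/(α+β) = 4

Console.WriteLine($"{pmf4} {cdf4} {draw} {mean}");
```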
- - a sequence of samples from the distribution. - - - - Samples a BetaBinomial distributed random variable. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - - - - Samples an array of BetaBinomial distributed random variables. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - - - - Initializes a new instance of the BetaScaled class. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - - - - Initializes a new instance of the BetaScaled class. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The random number generator which is used to draw random samples. - - - - Create a Beta PERT distribution, used in risk analysis and other domains where an expert forecast - is used to construct an underlying beta distribution. - - The minimum value. - The maximum value. - The most likely value (mode). - The random number generator which is used to draw random samples. - The Beta distribution derived from the PERT parameters. - - - - A string representation of the distribution. - - A string representation of the BetaScaled distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - - - - Gets the α shape parameter of the BetaScaled distribution. Range: α > 0. - - - - - Gets the β shape parameter of the BetaScaled distribution. Range: β > 0. - - - - - Gets the location (μ) of the BetaScaled distribution. - - - - - Gets the scale (σ) of the BetaScaled distribution. Range: σ > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the BetaScaled distribution. - - - - - Gets the variance of the BetaScaled distribution. - - - - - Gets the standard deviation of the BetaScaled distribution. - - - - - Gets the entropy of the BetaScaled distribution. - - - - - Gets the skewness of the BetaScaled distribution. - - - - - Gets the mode of the BetaScaled distribution; when there are multiple answers, this routine will return 0.5. 
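The PERT factory described above (minimum, maximum, most likely value) is a convenient way to turn a three-point expert estimate into a full distribution for risk analysis; a hedged sketch, with `BetaScaled.PERT` and the member names assumed:

```csharp
// Sketch only; assumes MathNet.Numerics.Distributions.BetaScaled and its PERT factory.
using System;
using MathNet.Numerics.Distributions;

// Expert forecast: minimum 2, maximum 12, most likely value 5.
var pert = BetaScaled.PERT(2.0, 12.0, 5.0);

double mean = pert.Mean;                        // classic PERT mean ≈ (min + 4·mode + max) / 6
double p8   = pert.CumulativeDistribution(8.0); // P(X <= 8)
double draw = pert.Sample();

Console.WriteLine($"{mean} {p8} {draw}");
```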
- - - - - Gets the median of the BetaScaled distribution. - - - - - Gets the minimum of the BetaScaled distribution. - - - - - Gets the maximum of the BetaScaled distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - The random number generator to use. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. 
Range: σ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Discrete Univariate Binomial distribution. - For details about this distribution, see - Wikipedia - Binomial distribution. - - - The distribution is parameterized by a probability (between 0.0 and 1.0). - - - - - Initializes a new instance of the Binomial class. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - If is not in the interval [0.0,1.0]. - If is negative. - - - - Initializes a new instance of the Binomial class. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The random number generator which is used to draw random samples. - If is not in the interval [0.0,1.0]. - If is negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - - - - Gets the success probability in each trial. Range: 0 ≤ p ≤ 1. - - - - - Gets the number of trials. Range: n ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. 
- - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the mode of the distribution. - - - - - Gets all modes of the distribution. - - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the cumulative distribution at location . - - - - - Generates a sample from the Binomial distribution without doing parameter checking. - - The random number generator to use. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successful trials. - - - - Samples a Binomially distributed random variable. - - The number of successes in N trials. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Binomially distributed random variables. - - a sequence of successes in N trials. - - - - Samples a binomially distributed random variable. - - The random number generator to use. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successes in trials. - - - - Samples a sequence of binomially distributed random variable. - - The random number generator to use. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Samples a binomially distributed random variable. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successes in trials. - - - - Samples a sequence of binomially distributed random variable. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. 
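A minimal sketch of the Binomial distribution described above, keeping the (p, n) argument order of the summaries; the identifiers are assumed:

```csharp
// Sketch only; assumes MathNet.Numerics.Distributions.Binomial.
using System;
using MathNet.Numerics.Distributions;

var binomial = new Binomial(0.2, 20);                    // p = 0.2, n = 20 trials

double pmf5 = binomial.Probability(5);                   // P(X = 5)
double cdf5 = binomial.CumulativeDistribution(5.0);      // P(X <= 5)
int successes = binomial.Sample();                       // number of successes in 20 trials

// Static forms; note the (p, n, ...) argument order used in the summaries above.
double pmfStatic = Binomial.PMF(0.2, 20, 5);
double cdfStatic = Binomial.CDF(0.2, 20, 5.0);

Console.WriteLine($"{pmf5} {cdf5} {successes} {pmfStatic} {cdfStatic}");
```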
- - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Gets the scale (a) of the distribution. Range: a > 0. - - - - - Gets the first shape parameter (c) of the distribution. Range: c > 0. - - - - - Gets the second shape parameter (k) of the distribution. Range: k > 0. - - - - - Initializes a new instance of the Burr Type XII class. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The random number generator which is used to draw random samples. Optional, can be null. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - - - - Gets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Burr distribution. - - - - - Gets the variance of the Burr distribution. - - - - - Gets the standard deviation of the Burr distribution. - - - - - Gets the mode of the Burr distribution. - - - - - Gets the minimum of the Burr distribution. - - - - - Gets the maximum of the Burr distribution. - - - - - Gets the entropy of the Burr distribution (currently not supported). - - - - - Gets the skewness of the Burr distribution. - - - - - Gets the median of the Burr distribution. - - - - - Generates a sample from the Burr distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - - - - Generates a sequence of samples from the Burr distribution. - - a sequence of samples from the distribution. - - - - Generates a sample from the Burr distribution. - - The random number generator to use. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - - - - Generates a sequence of samples from the Burr distribution. - - The random number generator to use. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - a sequence of samples from the distribution. - - - - Gets the n-th raw moment of the distribution. - - The order (n) of the moment. Range: n ≥ 1. - the n-th moment of the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . 
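A hedged sketch of the Burr Type XII distribution described above, using the (a, c, k) parameter order from the summaries; the class and member names are assumed:

```csharp
// Sketch only; assumes MathNet.Numerics.Distributions.Burr (Type XII).
using System;
using MathNet.Numerics.Distributions;

var burr = new Burr(1.0, 2.0, 3.0, new Random(42));  // scale a = 1, shapes c = 2, k = 3

double pdf  = burr.Density(0.5);                     // PDF at x = 0.5
double cdf  = burr.CumulativeDistribution(0.5);      // P(X <= 0.5)
double draw = burr.Sample();

// Static form with the explicit (a, c, k, x) argument order used above.
double cdfStatic = Burr.CDF(1.0, 2.0, 3.0, 0.5);

Console.WriteLine($"{pdf} {cdf} {draw} {cdfStatic}");
```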
- - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Discrete Univariate Categorical distribution. - For details about this distribution, see - Wikipedia - Categorical distribution. This - distribution is sometimes called the Discrete distribution. - - - The distribution is parameterized by a vector of ratios: in other words, the parameter - does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized - to sum to 1 in floating point representation. - - - Support: 0..k where k = length(probability mass array)-1 - - - - - Initializes a new instance of the Categorical class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - If any of the probabilities are negative or do not sum to one. - - - - Initializes a new instance of the Categorical class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The random number generator which is used to draw random samples. - If any of the probabilities are negative or do not sum to one. - - - - Initializes a new instance of the Categorical class from a . The distribution - will not be automatically updated when the histogram changes. The categorical distribution will have - one value for each bucket and a probability for that value proportional to the bucket count. - - The histogram from which to create the categorical variable. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Checks whether the parameters of the distribution are valid. - - An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. - If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true - - - - Checks whether the parameters of the distribution are valid. 
- - An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. - If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true - - - - Gets the probability mass vector (non-negative ratios) of the multinomial. - - Sometimes the normalized probability vector cannot be represented exactly in a floating point representation. - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - Throws a . - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets he mode of the distribution. - - Throws a . - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. 
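A minimal sketch of the Categorical (Discrete) distribution described above, which accepts unnormalized ratios; the identifiers are assumed:

```csharp
// Sketch only; assumes MathNet.Numerics.Distributions.Categorical.
using System;
using MathNet.Numerics.Distributions;

// Ratios do not need to sum to 1; they are treated as unnormalized weights.
var weights = new[] { 1.0, 3.0, 6.0 };
var categorical = new Categorical(weights);

double p1   = categorical.Probability(1);               // 3/10 = 0.3
double cdf1 = categorical.CumulativeDistribution(1.0);  // P(X <= 1) = 0.4
int draw    = categorical.Sample();                     // integer in 0..2

// Static sampling straight from a weight vector.
int drawStatic = Categorical.Sample(new Random(42), weights);

Console.WriteLine($"{p1} {cdf1} {draw} {drawStatic}");
```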
- - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - An array corresponding to a CDF for a categorical distribution. Not assumed to be normalized. - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. - - - - Computes the cumulative distribution function. This method performs no parameter checking. - If the probability mass was normalized, the resulting cumulative distribution is normalized as well (up to numerical errors). - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - An array representing the unnormalized cumulative distribution function. - - - - Returns one trials from the categorical distribution. - - The random number generator to use. - The (unnormalized) cumulative distribution of the probability distribution. - One sample from the categorical distribution implied by . - - - - Samples a Binomially distributed random variable. - - The number of successful trials. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Bernoulli distributed random variables. - - a sequence of successful trial counts. - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - The random number generator to use. - An array of nonnegative ratios. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - The random number generator to use. - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - An array of nonnegative ratios. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - The random number generator to use. - An array of the cumulative distribution. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - The random number generator to use. - An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. 
- An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - An array of the cumulative distribution. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Continuous Univariate Cauchy distribution. - The Cauchy distribution is a symmetric continuous probability distribution. For details about this distribution, see - Wikipedia - Cauchy distribution. - - - - - Initializes a new instance of the class with the location parameter set to 0 and the scale parameter set to 1 - - - - - Initializes a new instance of the class. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - - - - Initializes a new instance of the class. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - - - - Gets the location (x0) of the distribution. - - - - - Gets the scale (γ) of the distribution. Range: γ > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Draws a random sample from the distribution. - - A random number from this distribution. - - - - Fills an array with samples generated from the distribution. 
- - - - - Generates a sequence of samples from the Cauchy distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - the inverse cumulative density at . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sequence of samples from the distribution. - - - - Continuous Univariate Chi distribution. - This distribution is a continuous probability distribution. The distribution usually arises when a k-dimensional vector's orthogonal - components are independent and each follow a standard normal distribution. The length of the vector will - then have a chi distribution. - Wikipedia - Chi distribution. - - - - - Initializes a new instance of the class. - - The degrees of freedom (k) of the distribution. Range: k > 0. - - - - Initializes a new instance of the class. - - The degrees of freedom (k) of the distribution. Range: k > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. 
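The Cauchy entries above describe the PDF, CDF, InvCDF and sampling for location x0 and scale γ. A small sketch of the closed forms (illustrative names, not the library's API):

```csharp
using System;

static class CauchySketch
{
    // CDF: P(X <= x) = atan((x - x0)/gamma)/pi + 1/2
    public static double Cdf(double x0, double gamma, double x)
        => Math.Atan((x - x0) / gamma) / Math.PI + 0.5;

    // InvCDF (quantile): x0 + gamma * tan(pi * (p - 1/2))
    public static double InvCdf(double x0, double gamma, double p)
        => x0 + gamma * Math.Tan(Math.PI * (p - 0.5));

    // Since the quantile is available in closed form, sampling is plain inverse transform.
    public static double Sample(Random rng, double x0, double gamma)
        => InvCdf(x0, gamma, rng.NextDouble());
}
```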
- - - - Tests whether the provided values are valid parameters for this distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - - - - Gets the degrees of freedom (k) of the Chi distribution. Range: k > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Generates a sample from the Chi distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Chi distribution. - - a sequence of samples from the distribution. - - - - Samples the distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a random number from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The degrees of freedom (k) of the distribution. Range: k > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The degrees of freedom (k) of the distribution. Range: k > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The degrees of freedom (k) of the distribution. Range: k > 0. - the cumulative distribution at location . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - a sequence of samples from the distribution. 
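The Chi entries above note that the distribution arises as the length of a vector of k independent standard normals. A sketch that samples along exactly that route, assuming integer degrees of freedom (names are illustrative, not the library's implementation):

```csharp
using System;

static class ChiSketch
{
    // One standard normal via the Box-Muller transform.
    static double StdNormal(Random rng)
    {
        double u1 = 1.0 - rng.NextDouble();   // in (0, 1], avoids log(0)
        double u2 = rng.NextDouble();
        return Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
    }

    // Chi(k): the Euclidean length of a vector of k independent standard normals.
    public static double Sample(Random rng, int freedom)
    {
        double sumSquares = 0.0;
        for (int i = 0; i < freedom; i++)
        {
            double z = StdNormal(rng);
            sumSquares += z * z;
        }
        return Math.Sqrt(sumSquares);
    }
}
```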
- - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sequence of samples from the distribution. - - - - Continuous Univariate Chi-Squared distribution. - This distribution is a sum of the squares of k independent standard normal random variables. - Wikipedia - ChiSquare distribution. - - - - - Initializes a new instance of the class. - - The degrees of freedom (k) of the distribution. Range: k > 0. - - - - Initializes a new instance of the class. - - The degrees of freedom (k) of the distribution. Range: k > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - - - - Gets the degrees of freedom (k) of the Chi-Squared distribution. Range: k > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the ChiSquare distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the ChiSquare distribution. - - a sequence of samples from the distribution. - - - - Samples the distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a random number from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The degrees of freedom (k) of the distribution. Range: k > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The degrees of freedom (k) of the distribution. Range: k > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. 
- The degrees of freedom (k) of the distribution. Range: k > 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The degrees of freedom (k) of the distribution. Range: k > 0. - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - Generates a sample from the ChiSquare distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Generates a sample from the ChiSquare distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Continuous Univariate Uniform distribution. - The continuous uniform distribution is a distribution over real numbers. For details about this distribution, see - Wikipedia - Continuous uniform distribution. - - - - - Initializes a new instance of the ContinuousUniform class with lower bound 0 and upper bound 1. - - - - - Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - If the upper bound is smaller than the lower bound. - - - - Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - The random number generator which is used to draw random samples. - If the upper bound is smaller than the lower bound. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - - - - Gets the lower bound of the distribution. - - - - - Gets the upper bound of the distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - - Gets the median of the distribution. - - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. 
- the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the ContinuousUniform distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the ContinuousUniform distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - the inverse cumulative density at . - - - - - Generates a sample from the ContinuousUniform distribution. - - The random number generator to use. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a uniformly distributed sample. - - - - Generates a sequence of samples from the ContinuousUniform distribution. - - The random number generator to use. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a sequence of uniformly distributed samples. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a sequence of samples from the distribution. - - - - Generates a sample from the ContinuousUniform distribution. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a uniformly distributed sample. - - - - Generates a sequence of samples from the ContinuousUniform distribution. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a sequence of uniformly distributed samples. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a sequence of samples from the distribution. 
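The continuous uniform entries above have simple closed forms for the PDF, CDF, InvCDF and sampling. A short sketch under those definitions (illustrative names):

```csharp
using System;

static class ContinuousUniformSketch
{
    // PDF is constant 1/(upper - lower) inside the interval, 0 outside.
    public static double Pdf(double lower, double upper, double x)
        => (x < lower || x > upper) ? 0.0 : 1.0 / (upper - lower);

    // CDF rises linearly from 0 at lower to 1 at upper.
    public static double Cdf(double lower, double upper, double x)
        => x <= lower ? 0.0 : x >= upper ? 1.0 : (x - lower) / (upper - lower);

    // InvCDF, which also gives inverse-transform sampling.
    public static double InvCdf(double lower, double upper, double p)
        => lower + p * (upper - lower);

    public static double Sample(Random rng, double lower, double upper)
        => InvCdf(lower, upper, rng.NextDouble());
}
```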
- - - - Discrete Univariate Conway-Maxwell-Poisson distribution. - The Conway-Maxwell-Poisson distribution is a generalization of the Poisson, Geometric and Bernoulli - distributions. It is parameterized by two real numbers "lambda" and "nu". For - - nu = 0 the distribution reverts to a Geometric distribution - nu = 1 the distribution reverts to the Poisson distribution - nu -> infinity the distribution converges to a Bernoulli distribution - - This implementation will cache the value of the normalization constant. - Wikipedia - ConwayMaxwellPoisson distribution. - - - - - The mean of the distribution. - - - - - The variance of the distribution. - - - - - Caches the value of the normalization constant. - - - - - Since many properties of the distribution can only be computed approximately, the tolerance - level specifies how much error we accept. - - - - - Initializes a new instance of the class. - - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - - - - Initializes a new instance of the class. - - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - A that represents this instance. - - - - Tests whether the provided values are valid parameters for this distribution. - - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - - - - Gets the lambda (λ) parameter. Range: λ > 0. - - - - - Gets the rate of decay (ν) parameter. Range: ν ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). 
- - The location at which to compute the cumulative distribution function. - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - the cumulative distribution at location . - - - - - Gets the normalization constant of the Conway-Maxwell-Poisson distribution. - - - - - Computes an approximate normalization constant for the CMP distribution. - - The lambda (λ) parameter for the CMP distribution. - The rate of decay (ν) parameter for the CMP distribution. - - an approximate normalization constant for the CMP distribution. - - - - - Returns one trials from the distribution. - - The random number generator to use. - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - The z parameter. - - One sample from the distribution implied by , , and . - - - - - Samples a Conway-Maxwell-Poisson distributed random variable. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples a sequence of a Conway-Maxwell-Poisson distributed random variables. - - - a sequence of samples from a Conway-Maxwell-Poisson distribution. - - - - - Samples a random variable. - - The random number generator to use. - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - - - - Samples a sequence of this random variable. - - The random number generator to use. - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - - - - Samples a random variable. - - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - - - - Samples a sequence of this random variable. - - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - - - - Multivariate Dirichlet distribution. For details about this distribution, see - Wikipedia - Dirichlet distribution. - - - - - Initializes a new instance of the Dirichlet class. The distribution will - be initialized with the default random number generator. - - An array with the Dirichlet parameters. - - - - Initializes a new instance of the Dirichlet class. The distribution will - be initialized with the default random number generator. - - An array with the Dirichlet parameters. - The random number generator which is used to draw random samples. - - - - Initializes a new instance of the class. - random number generator. - The value of each parameter of the Dirichlet distribution. - The dimension of the Dirichlet distribution. - - - - Initializes a new instance of the class. - random number generator. - The value of each parameter of the Dirichlet distribution. - The dimension of the Dirichlet distribution. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - No parameter can be less than zero and at least one parameter should be larger than zero. - - The parameters of the Dirichlet distribution. 
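The Conway-Maxwell-Poisson entries above mention caching an approximate normalization constant and a tolerance on the truncation error. A rough sketch of that idea, assuming the usual series Z(λ, ν) = Σ_{j≥0} λ^j / (j!)^ν and that it converges (e.g. ν > 0); the names, the default tolerance and the iteration cap are illustrative, not the library's:

```csharp
using System;

static class ConwayMaxwellPoissonSketch
{
    // Z(lambda, nu) = sum_{j>=0} lambda^j / (j!)^nu, truncated once terms fall below `tolerance`.
    public static double NormalizationConstant(double lambda, double nu, double tolerance = 1e-12)
    {
        double z = 1.0;      // j = 0 term
        double term = 1.0;
        for (int j = 1; j < 10000; j++)
        {
            term *= lambda / Math.Pow(j, nu);   // term_j = term_{j-1} * lambda / j^nu
            z += term;
            if (term < tolerance) break;
        }
        return z;
    }

    // PMF: P(X = k) = lambda^k / ((k!)^nu * Z(lambda, nu)), evaluated partly in log space.
    public static double Pmf(int k, double lambda, double nu)
    {
        double logNumerator = k * Math.Log(lambda);
        for (int j = 2; j <= k; j++) logNumerator -= nu * Math.Log(j);
        return Math.Exp(logNumerator) / NormalizationConstant(lambda, nu);
    }
}
```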
- - - - Gets or sets the parameters of the Dirichlet distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the dimension of the Dirichlet distribution. - - - - - Gets the sum of the Dirichlet parameters. - - - - - Gets the mean of the Dirichlet distribution. - - - - - Gets the variance of the Dirichlet distribution. - - - - - Gets the entropy of the distribution. - - - - - Computes the density of the distribution. - - The locations at which to compute the density. - the density at . - The Dirichlet distribution requires that the sum of the components of x equals 1. - You can also leave out the last component, and it will be computed from the others. - - - - Computes the log density of the distribution. - - The locations at which to compute the density. - the density at . - - - - Samples a Dirichlet distributed random vector. - - A sample from this distribution. - - - - Samples a Dirichlet distributed random vector. - - The random number generator to use. - The Dirichlet distribution parameter. - a sample from the distribution. - - - - Discrete Univariate Uniform distribution. - The discrete uniform distribution is a distribution over integers. The distribution - is parameterized by a lower and upper bound (both inclusive). - Wikipedia - Discrete uniform distribution. - - - - - Initializes a new instance of the DiscreteUniform class. - - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - - - - Initializes a new instance of the DiscreteUniform class. - - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - - - - Gets the inclusive lower bound of the probability distribution. - - - - - Gets the inclusive upper bound of the probability distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the mode of the distribution; since every element in the domain has the same probability this method returns the middle one. - - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . 
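The discrete uniform entries above (inclusive lower and upper bounds) reduce to counting support points. A small sketch with illustrative names:

```csharp
using System;

static class DiscreteUniformSketch
{
    // PMF: every integer in [lower, upper] has probability 1/(upper - lower + 1).
    public static double Pmf(int lower, int upper, int k)
        => (k < lower || k > upper) ? 0.0 : 1.0 / (upper - lower + 1);

    // CDF: P(X <= x) counts how many support points lie at or below x.
    public static double Cdf(int lower, int upper, double x)
        => x < lower ? 0.0
         : x >= upper ? 1.0
         : (Math.Floor(x) - lower + 1) / (upper - lower + 1);

    // Sampling: Random.Next excludes its upper bound, hence the +1.
    public static int Sample(Random rng, int lower, int upper)
        => rng.Next(lower, upper + 1);
}
```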
- - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - the cumulative distribution at location . - - - - - Generates one sample from the discrete uniform distribution. This method does not do any parameter checking. - - The random source to use. - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - A random sample from the discrete uniform distribution. - - - - Draws a random sample from the distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of uniformly distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a uniformly distributed random variable. - - The random number generator to use. - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - A sample from the discrete uniform distribution. - - - - Samples a sequence of uniformly distributed random variables. - - The random number generator to use. - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - a sequence of samples from the discrete uniform distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - a sequence of samples from the discrete uniform distribution. - - - - Samples a uniformly distributed random variable. - - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - A sample from the discrete uniform distribution. - - - - Samples a sequence of uniformly distributed random variables. - - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - a sequence of samples from the discrete uniform distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - Lower bound, inclusive. Range: lower ≤ upper. - Upper bound, inclusive. Range: lower ≤ upper. - a sequence of samples from the discrete uniform distribution. - - - - Continuous Univariate Erlang distribution. - This distribution is a continuous probability distribution with wide applicability primarily due to its - relation to the exponential and Gamma distributions. - Wikipedia - Erlang distribution. - - - - - Initializes a new instance of the class. - - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - - - - Initializes a new instance of the class. - - The shape (k) of the Erlang distribution. Range: k ≥ 0. 
- The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - The random number generator which is used to draw random samples. - - - - Constructs a Erlang distribution from a shape and scale parameter. The distribution will - be initialized with the default random number generator. - - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The scale (μ) of the Erlang distribution. Range: μ ≥ 0. - The random number generator which is used to draw random samples. Optional, can be null. - - - - Constructs a Erlang distribution from a shape and inverse scale parameter. The distribution will - be initialized with the default random number generator. - - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - The random number generator which is used to draw random samples. Optional, can be null. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - - - - Gets the shape (k) of the Erlang distribution. Range: k ≥ 0. - - - - - Gets the rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - - - - - Gets the scale of the Erlang distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum value. - - - - - Gets the Maximum value. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Generates a sample from the Erlang distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Erlang distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. 
- The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - the cumulative distribution at location . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The shape (k) of the Erlang distribution. Range: k ≥ 0. - The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. - a sequence of samples from the distribution. - - - - Continuous Univariate Exponential distribution. - The exponential distribution is a distribution over the real numbers parameterized by one non-negative parameter. - Wikipedia - exponential distribution. - - - - - Initializes a new instance of the class. - - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - - - - Initializes a new instance of the class. - - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - - - - Gets the rate (λ) parameter of the distribution. Range: λ ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . 
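The Erlang entries above point to its close relation to the exponential and Gamma distributions. One consequence worth illustrating: an Erlang(k, λ) variate is the sum of k independent Exponential(λ) variates, each of which is an inverse-transform draw −ln(u)/λ. A sketch under that construction (illustrative names, not the library's implementation):

```csharp
using System;

static class ErlangSketch
{
    // Exponential(rate) by inverse transform: -ln(u)/rate with u in (0, 1].
    public static double SampleExponential(Random rng, double rate)
        => -Math.Log(1.0 - rng.NextDouble()) / rate;

    // Erlang(shape, rate) as the sum of `shape` independent exponentials,
    // folded into a single logarithm of the product of uniforms.
    public static double SampleErlang(Random rng, int shape, double rate)
    {
        double product = 1.0;
        for (int i = 0; i < shape; i++)
        {
            product *= 1.0 - rng.NextDouble();   // stays in (0, 1], avoids log(0)
        }
        return -Math.Log(product) / rate;
    }
}
```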
- - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Draws a random sample from the distribution. - - A random number from this distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Exponential distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - the inverse cumulative density at . - - - - - Draws a random sample from the distribution. - - The random number generator to use. - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - A random number from this distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sequence of samples from the Exponential distribution. - - The random number generator to use. - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - a sequence of samples from the distribution. - - - - Draws a random sample from the distribution. - - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - A random number from this distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sequence of samples from the Exponential distribution. - - The rate (λ) parameter of the distribution. Range: λ ≥ 0. - a sequence of samples from the distribution. - - - - Continuous Univariate F-distribution, also known as Fisher-Snedecor distribution. - For details about this distribution, see - Wikipedia - FisherSnedecor distribution. - - - - - Initializes a new instance of the class. - - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - - - - Initializes a new instance of the class. 
- - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - - - - Gets the first degree of freedom (d1) of the distribution. Range: d1 > 0. - - - - - Gets the second degree of freedom (d2) of the distribution. Range: d2 > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the FisherSnedecor distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the FisherSnedecor distribution. - - a sequence of samples from the distribution. - - - - Generates one sample from the FisherSnedecor distribution without parameter checking. - - The random number generator to use. - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - a FisherSnedecor distributed random number. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). 
- - The location at which to compute the cumulative distribution function. - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - The random number generator to use. - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The first degree of freedom (d1) of the distribution. Range: d1 > 0. - The second degree of freedom (d2) of the distribution. Range: d2 > 0. - a sequence of samples from the distribution. - - - - Continuous Univariate Gamma distribution. - For details about this distribution, see - Wikipedia - Gamma distribution. - - - The Gamma distribution is parametrized by a shape and inverse scale parameter. When we want - to specify a Gamma distribution which is a point distribution we set the shape parameter to be the - location of the point distribution and the inverse scale as positive infinity. The distribution - with shape and inverse scale both zero is undefined. - - Random number generation for the Gamma distribution is based on the algorithm in: - "A Simple Method for Generating Gamma Variables" - Marsaglia & Tsang - ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, Pages 363–372. - - - - - Initializes a new instance of the Gamma class. - - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - - - - Initializes a new instance of the Gamma class. - - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. 
- The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - The random number generator which is used to draw random samples. - - - - Constructs a Gamma distribution from a shape and scale parameter. The distribution will - be initialized with the default random number generator. - - The shape (k) of the Gamma distribution. Range: k ≥ 0. - The scale (θ) of the Gamma distribution. Range: θ ≥ 0 - The random number generator which is used to draw random samples. Optional, can be null. - - - - Constructs a Gamma distribution from a shape and inverse scale parameter. The distribution will - be initialized with the default random number generator. - - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - The random number generator which is used to draw random samples. Optional, can be null. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - - - - Gets or sets the shape (k, α) of the Gamma distribution. Range: α ≥ 0. - - - - - Gets or sets the rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - - - - - Gets or sets the scale (θ) of the Gamma distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Gamma distribution. - - - - - Gets the variance of the Gamma distribution. - - - - - Gets the standard deviation of the Gamma distribution. - - - - - Gets the entropy of the Gamma distribution. - - - - - Gets the skewness of the Gamma distribution. - - - - - Gets the mode of the Gamma distribution. - - - - - Gets the median of the Gamma distribution. - - - - - Gets the minimum of the Gamma distribution. - - - - - Gets the maximum of the Gamma distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the Gamma distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Gamma distribution. - - a sequence of samples from the distribution. - - - - Sampling implementation based on: - "A Simple Method for Generating Gamma Variables" - Marsaglia & Tsang - ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, Pages 363–372. - This method performs no parameter checks. - - The random number generator to use. - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. 
- The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - A sample from a Gamma distributed random variable. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - the inverse cumulative density at . - - - - - Generates a sample from the Gamma distribution. - - The random number generator to use. - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the Gamma distribution. - - The random number generator to use. - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the Gamma distribution. - - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the Gamma distribution. - - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The shape (k, α) of the Gamma distribution. Range: α ≥ 0. - The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Discrete Univariate Geometric distribution. - The Geometric distribution is a distribution over positive integers parameterized by one positive real number. - This implementation of the Geometric distribution will never generate 0's. - Wikipedia - geometric distribution. - - - - - Initializes a new instance of the Geometric class. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. 
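The Gamma entries above cite Marsaglia & Tsang, "A Simple Method for Generating Gamma Variables" (ACM TOMS 26(3), 2000), as the basis for sampling. A hedged sketch of that rejection scheme, not the library's implementation (names are illustrative; shape < 1 is handled by the usual boost-and-scale trick, and the rate is applied as the inverse scale):

```csharp
using System;

static class GammaSketch
{
    static double StdNormal(Random rng)
    {
        double u1 = 1.0 - rng.NextDouble();
        double u2 = rng.NextDouble();
        return Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
    }

    // Marsaglia & Tsang (2000) rejection sampler for Gamma(shape, rate).
    public static double Sample(Random rng, double shape, double rate)
    {
        if (shape < 1.0)
        {
            // Boost the shape by one and correct with u^(1/shape).
            double boost = rng.NextDouble();
            return Sample(rng, shape + 1.0, rate) * Math.Pow(boost, 1.0 / shape);
        }

        double d = shape - 1.0 / 3.0;
        double c = 1.0 / Math.Sqrt(9.0 * d);
        while (true)
        {
            double x = StdNormal(rng);
            double v = 1.0 + c * x;
            if (v <= 0.0) continue;
            v = v * v * v;
            double u = rng.NextDouble();
            // Cheap squeeze test first, then the full acceptance test.
            if (u < 1.0 - 0.0331 * x * x * x * x) return d * v / rate;
            if (Math.Log(u) < 0.5 * x * x + d * (1.0 - v + Math.Log(v))) return d * v / rate;
        }
    }
}
```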
- - - - Initializes a new instance of the Geometric class. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - A that represents this instance. - - - - Tests whether the provided values are valid parameters for this distribution. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - Throws a not supported exception. - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the cumulative distribution at location . - - - - - Returns one sample from the distribution. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - One sample from the distribution implied by . - - - - Samples a Geometric distributed random variable. - - A sample from the Geometric distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Geometric distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a random variable. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. 
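The Geometric entries above stress that this implementation never generates 0, i.e. the support is {1, 2, ...}. A sketch of the PMF and of inverse-transform sampling on that support (illustrative names):

```csharp
using System;

static class GeometricSketch
{
    // PMF on {1, 2, ...}: P(X = k) = (1 - p)^(k - 1) * p.
    public static double Pmf(double p, int k)
        => k < 1 ? 0.0 : Math.Pow(1.0 - p, k - 1) * p;

    // Inverse transform: the smallest k with CDF(k) >= u is ceil(ln(1 - u) / ln(1 - p)).
    public static int Sample(Random rng, double p)
    {
        double u = rng.NextDouble();
        return Math.Max(1, (int)Math.Ceiling(Math.Log(1.0 - u) / Math.Log(1.0 - p)));
    }
}
```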
- - - - Samples a random variable. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Discrete Univariate Hypergeometric distribution. - This distribution is a discrete probability distribution that describes the number of successes in a sequence - of n draws from a finite population without replacement, just as the binomial distribution - describes the number of successes for draws with replacement - Wikipedia - Hypergeometric distribution. - - - - - Initializes a new instance of the Hypergeometric class. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Initializes a new instance of the Hypergeometric class. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the size of the population (N). - - - - - Gets the number of draws without replacement (n). - - - - - Gets the number successes within the population (K, M). - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - the log probability mass at location . 
- - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - the cumulative distribution at location . - - - - - Generates a sample from the Hypergeometric distribution without doing parameter checking. - - The random number generator to use. - The size of the population (N). - The number successes within the population (K, M). - The n parameter of the distribution. - a random number from the Hypergeometric distribution. - - - - Samples a Hypergeometric distributed random variable. - - The number of successes in n trials. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Hypergeometric distributed random variables. - - a sequence of successes in n trials. - - - - Samples a random variable. - - The random number generator to use. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Samples a sequence of this random variable. - - The random number generator to use. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Samples a random variable. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Samples a sequence of this random variable. - - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The size of the population (N). - The number successes within the population (K, M). - The number of draws without replacement (n). - - - - Continuous Univariate Probability Distribution. - - - - - - Gets the mode of the distribution. - - - - - Gets the smallest element in the domain of the distribution which can be represented by a double. - - - - - Gets the largest element in the domain of the distribution which can be represented by a double. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - Draws a random sample from the distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Draws a sequence of random samples from the distribution. - - an infinite sequence of samples from the distribution. - - - - Discrete Univariate Probability Distribution. - - - - - - Gets the mode of the distribution. - - - - - Gets the smallest element in the domain of the distribution which can be represented by an integer. - - - - - Gets the largest element in the domain of the distribution which can be represented by an integer. 
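The summaries above describe a .NET distributions API whose member set matches Math.NET Numerics (MathNet.Numerics.Distributions). Assuming that library, and that the documented PMF/CDF/sampling members carry their usual names there (Probability, CumulativeDistribution, Sample), a minimal sketch of the two discrete classes summarized above could look like this:

```csharp
using System;
using MathNet.Numerics.Distributions;

// Geometric distribution with success probability p = 0.25.
var geom = new Geometric(0.25);
Console.WriteLine(geom.Probability(3));              // P(X = 3)
Console.WriteLine(geom.CumulativeDistribution(3));   // P(X <= 3)
Console.WriteLine(geom.Mean);                        // expected value

// Hypergeometric: N = 52 cards, K = 13 hearts, n = 5 cards drawn without replacement.
var hyper = new Hypergeometric(52, 13, 5);
Console.WriteLine(hyper.Probability(2));             // P(exactly 2 hearts)
Console.WriteLine(hyper.Mean);                       // n*K/N

// Draw a few random counts from each distribution.
for (int i = 0; i < 3; i++)
    Console.WriteLine($"geom={geom.Sample()}  hyper={hyper.Sample()}");
```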
Discrete univariate probability distribution interface: the mode, the smallest and largest elements of the integer domain, the probability mass P(X = k) and its logarithm, a single random sample, an array fill of samples and an infinite sample sequence.

Probability distribution base interface: gets or sets the random number generator used to draw random samples.

InverseGamma: continuous univariate inverse Gamma distribution over the positive reals, parameterized by the shape α > 0 and scale β > 0 (Wikipedia: Inverse-gamma distribution). Documented members: constructors (α, β, optional random number generator); a string representation; parameter validation; the shape, scale and random source; the mean, variance, standard deviation, entropy, skewness, mode, median (not supported, throws), minimum and maximum; the probability density ∂P(X ≤ x)/∂x, its logarithm and the cumulative distribution P(X ≤ x), instance and static; and sampling from the inverse Gamma distribution as a single draw, a sequence or an array fill, instance and static, with or without an explicit random number generator.

InverseGaussian: continuous univariate inverse Gaussian (Wald) distribution with mean μ > 0 and shape λ > 0. Documented members: a constructor (μ, λ, random number generator); a string representation; parameter validation; the random source; the mean, variance, standard deviation, median (no closed form exists, so it is approximated numerically and can throw), minimum, maximum, skewness, kurtosis, mode and entropy (not supported); sampling from the inverse Gaussian distribution as a single draw, an array fill or a sequence, instance and static; the density, log density, cumulative distribution and inverse cumulative distribution, instance and static; and an estimator that fits μ and λ to sample data by maximum likelihood.
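Because every continuous distribution in this file implements the common interface described above, calling code can treat them polymorphically. A sketch, assuming the interface is exposed as IContinuousDistribution and the constructors take the parameters listed above (the random-source type is likewise an assumption):

```csharp
using System;
using MathNet.Numerics.Distributions;
using MathNet.Numerics.Random;

// Any continuous distribution can be handled through the shared interface.
IContinuousDistribution[] models =
{
    new InverseGamma(3.0, 2.0),                              // shape α = 3, scale β = 2
    new InverseGaussian(1.0, 2.0, new SystemRandomSource(42)) // mean μ = 1, shape λ = 2
};

foreach (var d in models)
{
    // Mode, Density and Sample are the interface members summarized above.
    Console.WriteLine(
        $"{d.GetType().Name}: mode={d.Mode:F4}, density(1.0)={d.Density(1.0):F4}, sample={d.Sample():F4}");
}
```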
InverseWishart: multivariate inverse Wishart distribution, parameterized by the degrees of freedom ν and the scale matrix Ψ; it is the conjugate prior for the covariance matrix of a multivariate normal distribution (Wikipedia: Inverse-Wishart distribution). The Cholesky factorization of the scale matrix is cached. Documented members: constructors (ν, Ψ, optional random number generator); a string representation; parameter validation; the degrees of freedom, scale matrix and random source; the mean, the mode (A. O'Hagan and J. J. Forster (2004), Kendall's Advanced Theory of Statistics: Bayesian Inference, 2B, 2nd ed., Arnold, ISBN 0-340-80752-0) and the variance (K. V. Mardia, J. T. Kent and J. M. Bibby (1979), Multivariate Analysis); the probability density at a given matrix (throws if the argument does not match the dimensions of the scale matrix); and sampling, instance and static, implemented by drawing a Wishart random matrix and inverting it.

Univariate probability distribution interface: the mean, variance, standard deviation, entropy, skewness, median and the cumulative distribution P(X ≤ x).

Laplace: continuous univariate Laplace distribution over the reals with location μ and scale b > 0; the density is p(x) = exp(-|x - μ| / b) / (2b) (Wikipedia: Laplace distribution). Documented members: constructors (the default is μ = 0, b = 1; otherwise μ and b with an optional random number generator; a negative scale throws); a string representation; parameter validation; the location, scale and random source; the mean, variance, standard deviation, entropy, skewness, mode, median, minimum and maximum; the density, log density and cumulative distribution, instance and static; and sampling as a single draw, a sequence or an array fill, instance and static, with or without an explicit random number generator.
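A short Laplace sketch, under the same assumption about the library and member names; it shows the instance density/CDF calls and both sampling styles (lazy sequence and array fill):

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

// Laplace with location μ = 0 and scale b = 1.5.
var laplace = new Laplace(0.0, 1.5);

Console.WriteLine(laplace.Density(0.5));                 // p(0.5)
Console.WriteLine(laplace.CumulativeDistribution(0.5));  // P(X <= 0.5)
Console.WriteLine(laplace.Variance);                     // 2 * b^2

// Draw 5 samples lazily and fill a pre-allocated buffer with 1000 more.
var firstFive = laplace.Samples().Take(5).ToArray();
var buffer = new double[1000];
laplace.Samples(buffer);
Console.WriteLine($"first sample: {firstFive[0]}, buffer mean: {buffer.Average():F3}");
```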
LogNormal: continuous univariate log-normal distribution (Wikipedia: Log-normal distribution), parameterized by the log-scale μ (the mean of the logarithm) and the shape σ ≥ 0 (the standard deviation of the logarithm). Documented members: constructors (μ, σ, optional random number generator); factory methods that build the distribution either from μ and σ or from the desired mean and variance of the distribution itself; an estimator that fits the parameters to sample data by maximum likelihood (MATLAB: lognfit); a string representation; parameter validation; the log-scale, shape and random source; the mean, variance, standard deviation, entropy, skewness, mode, median, minimum and maximum; the density (MATLAB: lognpdf), log density, cumulative distribution (MATLAB: logncdf) and inverse cumulative distribution, i.e. the quantile or percent point function (MATLAB: logninv), each as instance methods and as static overloads taking (μ, σ); and Box-Muller based sampling as a single draw, a sequence or an array fill, instance and static, with or without an explicit random number generator.
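For the log-normal class, the factory method built from the desired mean and variance is often more convenient than specifying μ and σ directly. A sketch, again assuming Math.NET-style member names (WithMeanVariance, InverseCumulativeDistribution, static PDF):

```csharp
using System;
using MathNet.Numerics.Distributions;

// Construct directly from the log-scale μ and shape σ ...
var byParams = new LogNormal(0.0, 0.25);

// ... or from the desired mean and variance of the distribution itself.
var byMoments = LogNormal.WithMeanVariance(10.0, 4.0);

Console.WriteLine(byParams.Median);                                // exp(μ)
Console.WriteLine(byMoments.Mean);                                 // ~10
Console.WriteLine(byMoments.CumulativeDistribution(12.0));         // P(X <= 12)
Console.WriteLine(byMoments.InverseCumulativeDistribution(0.95));  // 95% quantile

// Static one-shot evaluation without constructing an instance.
Console.WriteLine(LogNormal.PDF(0.0, 0.25, 1.0));
```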
MatrixNormal: multivariate matrix-valued normal distribution, parameterized by a mean matrix M, a covariance matrix V for the rows and a covariance matrix K for the columns; if M is d-by-m then V is d-by-d and K is m-by-m (Wikipedia: Matrix normal distribution). Documented members: fields holding M, V and K; constructors (M, V, K, optional random number generator; mismatched dimensions throw); a string representation; parameter validation; the mean, row covariance, column covariance and random source; the probability density at a given matrix (throws on wrong dimensions); and sampling, instance and static, plus a helper that samples a vector-valued normal variable from a mean vector and covariance matrix.

Multinomial: multivariate multinomial distribution (Wikipedia: Multinomial distribution), parameterized by a vector of non-negative ratios and the number of trials. The ratios do not have to sum to 1; they are normalized internally, because some vectors cannot be normalized exactly in floating point. Documented members: fields for the normalized probabilities and the number of trials; constructors from (ratios, trials), (ratios, trials, random number generator) or a histogram plus the number of trials (negative ratios, ratios that cannot be normalized, or a negative number of trials throw, and the distribution is not updated when the histogram later changes); a string representation; parameter validation (false for negative ratios, a zero ratio sum or a negative number of trials); the ratio vector, number of trials and random source; the mean, variance and skewness; the probability mass and log probability mass of a count vector x1, ..., xk (a null or wrong-length vector throws); and sampling of one count vector or a sequence of count vectors, instance and static.
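A small Multinomial sketch; the ratio vector below is deliberately left unnormalized, as the documentation above allows. The Sample/Probability member names and the int[] return type are assumptions in the same spirit as the earlier examples:

```csharp
using System;
using MathNet.Numerics.Distributions;

// Unnormalized ratios are fine; they are normalized internally (here 1 : 2 : 1).
double[] ratios = { 1.0, 2.0, 1.0 };
var multinomial = new Multinomial(ratios, 10);   // 10 trials over 3 categories

int[] counts = multinomial.Sample();             // one count vector summing to 10
Console.WriteLine(string.Join(", ", counts));

// Probability mass of one particular outcome vector.
Console.WriteLine(multinomial.Probability(new[] { 3, 5, 2 }));
```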
NegativeBinomial: discrete univariate negative binomial distribution with parameters r ≥ 0 and 0 ≤ p ≤ 1; for integer r it can be read as the number of failures before the r-th success when each trial succeeds with probability p (Wikipedia: Negative binomial distribution). Documented members: constructors (r, p, optional random number generator); a string representation; parameter validation; the number of successes, success probability and random source; the mean, variance, standard deviation, entropy, skewness, mode, median, and the smallest and largest elements of the integer domain; the probability mass P(X = k), its logarithm and the cumulative distribution P(X ≤ x), instance and static; and an unchecked sampler plus sampling as a single draw, a sequence or an array fill, instance and static, with or without an explicit random number generator.
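A negative-binomial sketch under the same naming assumptions; note that with the parameterization above the variable counts failures before the r-th success, so the mean is r(1-p)/p:

```csharp
using System;
using MathNet.Numerics.Distributions;

// r = 5 required successes, p = 0.4 success probability per trial:
// X counts the failures observed before the 5th success.
var negBin = new NegativeBinomial(5, 0.4);

Console.WriteLine(negBin.Probability(7));              // P(X = 7)
Console.WriteLine(negBin.CumulativeDistribution(7));   // P(X <= 7)
Console.WriteLine(negBin.Mean);                        // r(1-p)/p = 7.5

for (int i = 0; i < 3; i++)
    Console.WriteLine(negBin.Sample());
```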
Normal: continuous univariate normal (Gaussian) distribution (Wikipedia: Normal distribution). Documented members: constructors (the default is the standard normal with mean 0 and standard deviation 1; otherwise mean μ and standard deviation σ ≥ 0, each optionally with a random number generator); factory methods that build the distribution from mean and standard deviation, mean and variance, or mean and precision; an estimator that fits the parameters to sample data by maximum likelihood (MATLAB: normfit); a string representation; parameter validation; the mean, standard deviation, variance, precision and random source; the entropy, skewness, mode, median, minimum and maximum; the density (MATLAB: normpdf), log density, cumulative distribution (MATLAB: normcdf) and inverse cumulative distribution, i.e. the quantile or percent point function (MATLAB: norminv), each as instance methods and as static overloads taking (μ, σ); and Box-Muller based sampling as a single draw, a sequence or an array fill, instance and static, with or without an explicit random number generator.

MeanPrecisionPair: a structure representing the mean/precision pair over which the NormalGamma distribution is defined, with a constructor and gettable/settable mean and precision.
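The normal class is the one the other examples build on; its static CDF/InvCDF helpers mirror the instance methods. A sketch, assuming the Math.NET names shown below:

```csharp
using System;
using MathNet.Numerics.Distributions;

// Equivalent ways to build N(5, σ = 2): directly, or via the factory methods.
var n1 = new Normal(5.0, 2.0);
var n2 = Normal.WithMeanVariance(5.0, 4.0);

Console.WriteLine(n1.Density(6.0));                         // pdf at 6
Console.WriteLine(n1.CumulativeDistribution(6.0));          // P(X <= 6)
Console.WriteLine(n2.InverseCumulativeDistribution(0.975)); // ~8.92, the 97.5% quantile

// Static one-shot helpers mirror the instance methods.
Console.WriteLine(Normal.CDF(5.0, 2.0, 6.0));
Console.WriteLine(Normal.InvCDF(0.0, 1.0, 0.975));          // ~1.96

// Fill a buffer with draws from the distribution.
var samples = new double[10_000];
n1.Samples(samples);
```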
NormalGamma: multivariate normal-gamma distribution, the conjugate prior for the mean and precision of a normal distribution, parameterized by the mean location, mean scale, precision shape and precision inverse scale: NG(μ, τ | mloc, mscale, pshape, pinvscale) = Normal(μ | mloc, 1/(mscale·τ)) · Gamma(τ | pshape, pinvscale). Degenerate cases: a known precision is encoded by the precision shape together with an infinite precision inverse scale, a known mean by the mean location together with an infinite mean scale, and both may be degenerate at once (Wikipedia: Normal-gamma distribution). Documented members: constructors (the four parameters, optionally with a random number generator); a string representation; parameter validation; the four parameters and the random source; the marginal distributions of the mean and of the precision; the mean and variance of the distribution; the density and log density, evaluated either at a mean/precision pair or at separate mean and precision values; and sampling of single mean/precision pairs or sequences of them, instance and static.
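A NormalGamma sketch. This one leans hardest on assumptions: the return type of Sample() (a mean/precision pair) and the Density overload taking separate mean and precision values are inferred from the member summaries above, not spelled out there as identifiers:

```csharp
using System;
using MathNet.Numerics.Distributions;

// Prior over (mean, precision) of a normal: mean location 0, mean scale 1,
// precision shape 2, precision inverse scale 2.
var prior = new NormalGamma(0.0, 1.0, 2.0, 2.0);

// One joint draw is a mean/precision pair.
MeanPrecisionPair mp = prior.Sample();
Console.WriteLine($"mu = {mp.Mean}, tau = {mp.Precision}");

// Density evaluated at separate mean and precision values.
Console.WriteLine(prior.Density(0.1, 1.2));
```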
Discrete Univariate Poisson distribution. The distribution is described at Wikipedia - Poisson distribution; its probability mass function is f(x) = exp(-λ)·λ^x / x!. Knuth's method is used to generate Poisson distributed random variables; an alternative internal sampler implements the "Rejection method PA" from "The Computer Generation of Poisson Random Variables" by A. C. Atkinson, Journal of the Royal Statistical Society Series C (Applied Statistics), Vol. 28, No. 1 (1979), pp. 29-35 (the algorithm given on page 32). The distribution is parameterized by λ > 0; the constructors take λ, optionally followed by a random number generator, and throw if λ is equal to or less than 0.0. Members: ToString; IsValidParameterSet(λ); the properties Lambda, the random number generator used to draw samples, Mean, Variance, StdDev, Entropy (an approximation, see the Wikipedia article), Skewness, Minimum, Maximum, Mode and Median (also an approximation); instance methods for the probability mass function, its logarithm and the cumulative distribution function; Sample, Samples and an array-filling overload; and static PMF, PMFLn, CDF, Sample, Samples and array-fill counterparts that take λ explicitly, with or without a supplied random number generator.
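A minimal C# sketch of the Poisson members documented above; the class name Poisson, the namespace and λ = 4.2 are assumptions for illustration.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions; // assumed namespace

// Poisson with rate λ = 4.2 (illustrative value)
var poisson = new Poisson(4.2);

Console.WriteLine(poisson.Probability(3));             // P(X = 3)
Console.WriteLine(poisson.ProbabilityLn(3));           // ln P(X = 3)
Console.WriteLine(poisson.CumulativeDistribution(3));  // P(X ≤ 3)
Console.WriteLine(poisson.Mean);                       // equals λ
Console.WriteLine(poisson.Sample());                   // one random count

// the average of many samples should be close to λ
Console.WriteLine(poisson.Samples().Take(10_000).Average());
```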
Continuous Univariate Rayleigh distribution. The Rayleigh distribution (pronounced /ˈreɪli/) is a continuous probability distribution. As an example of how it arises, the wind speed will have a Rayleigh distribution if the components of the two-dimensional wind velocity vector are uncorrelated and normally distributed with equal variance. For details about this distribution, see Wikipedia - Rayleigh distribution. It is parameterized by the scale σ > 0; the constructors take σ, optionally followed by a random number generator, and throw if σ is negative. Members: ToString; IsValidParameterSet(σ); the properties Scale, the gettable/settable random number generator, Mean, Variance, StdDev, Entropy, Skewness, Mode, Median, Minimum and Maximum; the instance methods Density, DensityLn, CumulativeDistribution and InverseCumulativeDistribution (the quantile or percent point function); Sample, Samples and an array-filling overload; and static PDF, PDFLn, CDF, InvCDF, Sample, Samples and array-fill counterparts that take σ explicitly, with or without a supplied random number generator.
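A minimal C# sketch of the Rayleigh members documented above; the class name Rayleigh, the namespace and σ = 2.0 are assumptions for illustration.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions; // assumed namespace

// Rayleigh with scale σ = 2.0 (illustrative value)
var rayleigh = new Rayleigh(2.0);

Console.WriteLine(rayleigh.Mean);                               // σ·sqrt(π/2)
Console.WriteLine(rayleigh.Density(1.5));                       // PDF at x = 1.5
Console.WriteLine(rayleigh.CumulativeDistribution(1.5));        // P(X ≤ 1.5)
Console.WriteLine(rayleigh.InverseCumulativeDistribution(0.5)); // median

double[] draws = rayleigh.Samples().Take(5).ToArray();
Console.WriteLine(string.Join(", ", draws));
```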
Continuous Univariate Skewed Generalized Error Distribution (SGED). Implements the univariate Skewed Generalized Error Distribution; for details, see Wikipedia - Generalized Error Distribution. It includes the Laplace, Normal and Student-t distributions as special cases and corresponds to the Skewed Generalized t-distribution with q = Inf. The implementation is based on the R package dsgt and its corresponding vignette (https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf); compared to that implementation, the options for mean adjustment and variance adjustment are always true, so the location (μ) is the mean of the distribution and the scale (σ) squared is its variance. The distribution uses the default random source unless a generator is set through the RandomSource property, and all incoming parameters are checked against their allowed ranges. Constructors: a default constructor giving location 0.0, scale 1.0, skew 0.0 and p 2.0 (a standard normal distribution), and a constructor taking the location μ, the scale σ > 0, the skew λ with -1 < λ < 1, and the kurtosis parameter p > 0. Members: the gettable/settable random number generator; ToString; IsValidParameterSet(μ, σ, λ, p); the properties for location, scale, skew and the kurtosis parameter; and static Sample, Samples and array-fill methods (sampling by inverse transform) that take the four parameters, with or without a supplied random number generator.

Continuous Univariate Skewed Generalized T-distribution. Implements the univariate Skewed Generalized t-distribution; for details, see Wikipedia - Skewed generalized t-distribution. The skewed generalized t-distribution contains many different distributions within it as special cases, depending on the parameterization chosen. The implementation is based on the same R package dsgt and vignette as above, again with mean adjustment and variance adjustment always true, so the location (μ) is the mean and the scale (σ) squared is the variance; the default random source and the parameter checking behave as for the SGED. Constructors: a default constructor giving location 0.0, scale 1.0, skew 0.0, p 2.0 and q Inf (a standard normal distribution), and a constructor taking the location μ, the scale σ > 0, the skew λ with -1 < λ < 1, and the two kurtosis parameters p > 0 and q > 0. A static helper returns, for a given parameter set, the known distribution that matches this parameterization, or null if none matches. Members: the gettable/settable random number generator; ToString; IsValidParameterSet(μ, σ, λ, p, q); the properties for location, scale, skew and the two kurtosis parameters; static PDF, PDFLn, CDF and InvCDF taking the five parameters; an instance InverseCumulativeDistribution (the quantile or percent point function); and static Sample, Samples and array-fill methods (sampling by inverse transform) that take the five parameters, with or without a supplied random number generator.
Continuous Univariate Stable distribution. A random variable is said to be stable (or to have a stable distribution) if a linear combination of two independent copies of the variable has the same distribution, up to location and scale parameters. For details about this distribution, see Wikipedia - Stable distribution. It is parameterized by the stability α (2 ≥ α > 0), the skewness β (1 ≥ β ≥ -1), the scale c > 0 and the location μ; the constructors take these four parameters, optionally followed by a random number generator. Members: ToString; IsValidParameterSet(α, β, c, μ); the properties for stability, skewness, scale and location, the gettable/settable random number generator, Mean, Variance, StdDev, Minimum and Maximum; Entropy always throws a not-supported exception, Skewness throws a not-supported exception if α != 2, and Mode and Median throw a not-supported exception if β != 0; the instance methods Density, DensityLn and CumulativeDistribution, where the CDF throws a not-supported exception unless the parameters match one of the closed-form special cases (α = 2; α = 1 with β = 0; or α = 0.5 with β = 1); an internal unchecked sampler; Sample, Samples and an array-filling overload; and static PDF, PDFLn, CDF, Sample, Samples and array-fill counterparts that take the four parameters explicitly, with or without a supplied random number generator.
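A minimal C# sketch of the Stable members documented above; the class name Stable, the namespace and the parameter values are assumptions for illustration, and the density/CDF calls deliberately use the α = 2 special case noted above.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions; // assumed namespace

// Stable with stability α = 1.7, skewness β = 0, scale c = 1.0, location μ = 0.0 (illustrative)
var stable = new Stable(1.7, 0.0, 1.0, 0.0);

// sampling works for any valid parameter set
double[] draws = stable.Samples().Take(5).ToArray();
Console.WriteLine(string.Join(", ", draws));

// closed-form density/CDF are only supported for the special cases listed above,
// e.g. α = 2 (the Gaussian case); other parameterizations throw.
var gaussianCase = new Stable(2.0, 0.0, 1.0, 0.0);
Console.WriteLine(gaussianCase.Density(0.5));
Console.WriteLine(gaussianCase.CumulativeDistribution(0.5));
```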
Continuous Univariate Student's T-distribution. Implements the univariate Student t-distribution; for details, see Wikipedia - Student's t-distribution. A slightly generalized version (compared to Wikipedia) is used, namely one that also parameterizes the location and scale; see the book "Bayesian Data Analysis" by Gelman et al. for more details. The density is p(x|μ,σ,ν) = Γ((ν+1)/2) / (Γ(ν/2)·σ·√(νπ)) · (1 + ((x-μ)/σ)²/ν)^(-(ν+1)/2). The distribution uses the default random source unless a generator is supplied or set through the RandomSource property; all incoming parameters are checked against their allowed ranges, which may involve heavy computation, and this check can be turned off by setting Control.CheckDistributionParameters to false. Constructors: a default constructor giving location 0.0, scale 1.0 and 1 degree of freedom, and constructors taking the location μ, the scale σ > 0 and the degrees of freedom ν > 0, optionally followed by a random number generator. Members: ToString; IsValidParameterSet(μ, σ, ν); the properties for location, scale and degrees of freedom, the gettable/settable random number generator, Mean, Variance, StdDev, Entropy, Skewness, Mode, Median, Minimum and Maximum; the instance methods Density, DensityLn, CumulativeDistribution and InverseCumulativeDistribution (the quantile or percent point function, with the warning that it is currently not an explicit implementation and hence slow and unreliable); an internal sampler implementing method 2 in section 5, chapter 9 of L. Devroye's "Non-Uniform Random Variate Generation"; Sample, Samples and an array-filling overload; and static PDF, PDFLn, CDF, InvCDF (same warning), Sample, Samples and array-fill counterparts that take μ, σ and ν explicitly, with or without a supplied random number generator (the sequence samplers are documented as using the Box-Muller algorithm).
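A minimal C# sketch of the StudentT members documented above; the namespace and parameter values are assumptions for illustration. Note the documentation's warning that the inverse CDF is not an explicit implementation.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions; // assumed namespace

// Student t with location μ = 0, scale σ = 1 and ν = 5 degrees of freedom (illustrative)
var t = new StudentT(0.0, 1.0, 5.0);

Console.WriteLine(t.Density(0.0));                         // PDF at 0
Console.WriteLine(t.CumulativeDistribution(2.0));          // P(X ≤ 2)
Console.WriteLine(t.InverseCumulativeDistribution(0.975)); // upper 2.5% quantile (documented as slow)

double[] draws = t.Samples().Take(5).ToArray();
Console.WriteLine(string.Join(", ", draws));
```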
Triangular distribution. For details, see Wikipedia - Triangular distribution. The distribution uses the default random source unless a generator is supplied or set through the RandomSource property; the parameter checks (which may involve heavy computation) can be turned off by setting Control.CheckDistributionParameters to false. It is parameterized by a lower bound, an upper bound and a mode (the most frequent value), with lower ≤ mode ≤ upper; the constructors take these three values, optionally followed by a random number generator, and throw if the upper bound is smaller than the mode or the mode is smaller than the lower bound. Members: ToString; IsValidParameterSet(lower, upper, mode); the properties LowerBound, UpperBound, the gettable/settable random number generator, Mean, Variance, StdDev, Entropy, Skewness, Mode (gettable/settable), Median, Minimum and Maximum; the instance methods Density, DensityLn, CumulativeDistribution and InverseCumulativeDistribution (the quantile or percent point function); Sample, Samples and an array-filling overload; and static PDF, PDFLn, CDF, InvCDF, Sample, Samples and array-fill counterparts that take the lower bound, upper bound and mode explicitly, with or without a supplied random number generator.
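A minimal C# sketch of the Triangular members documented above; the namespace and the bounds/mode values are assumptions for illustration.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions; // assumed namespace

// Triangular with lower bound 0, upper bound 10 and mode 3 (illustrative values)
var tri = new Triangular(0.0, 10.0, 3.0);

Console.WriteLine(tri.Mean);                              // (lower + upper + mode) / 3
Console.WriteLine(tri.Density(3.0));                      // peak of the PDF at the mode
Console.WriteLine(tri.CumulativeDistribution(5.0));       // P(X ≤ 5)
Console.WriteLine(tri.InverseCumulativeDistribution(0.5)); // median

double[] draws = tri.Samples().Take(5).ToArray();
Console.WriteLine(string.Join(", ", draws));
```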
Truncated Pareto distribution. It is parameterized by the scale xm > 0, the shape α > 0 and the truncation T > xm; the constructor takes these three values together with the random number generator used to draw samples, and throws if the scale or shape is non-positive or if T ≤ xm. Members: ToString; IsValidParameterSet(xm, α, T); the properties for the scale, shape and truncation, the random number generator, Mean, Variance, StdDev, Skewness, Median, Minimum and Maximum (Mode and Entropy are not supported); a method returning the n-th raw moment for n ≥ 1; Sample, Samples and an array-filling overload; static Sample, Samples and array-fill methods taking a random number generator and the three parameters; the instance methods Density, DensityLn, CumulativeDistribution and the inverse cumulative distribution (solving P(X ≤ x) = p); and static InvCDF, PDF, PDFLn and CDF counterparts that take xm, α and T explicitly.
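A minimal C# sketch of the TruncatedPareto members documented above; the namespace, the explicit System.Random argument and the parameter values are assumptions for illustration.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions; // assumed namespace

// Truncated Pareto with scale xm = 1.0, shape α = 1.5 and truncation T = 10.0,
// with an explicit System.Random as the sample source (illustrative values)
var tp = new TruncatedPareto(1.0, 1.5, 10.0, new Random(42));

Console.WriteLine(tp.Mean);                        // mean of the truncated distribution
Console.WriteLine(tp.Density(2.0));                // PDF at x = 2
Console.WriteLine(tp.CumulativeDistribution(2.0)); // P(X ≤ 2)

double[] draws = tp.Samples().Take(5).ToArray();
Console.WriteLine(string.Join(", ", draws));
```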
Continuous Univariate Weibull distribution. For details about this distribution, see Wikipedia - Weibull distribution. The Weibull distribution is parameterized by a shape (k > 0) and a scale (λ > 0) parameter; as an implementation detail, the reusable intermediate result 1 / (scale^shape) is cached, which gives slightly better numerical precision in certain constellations without any additional computations. The constructors take the shape and scale, optionally followed by a random number generator. Members: ToString; IsValidParameterSet(k, λ); the properties Shape, Scale, the gettable/settable random number generator, Mean, Variance, StdDev, Entropy, Skewness, Mode, Median, Minimum and Maximum; the instance methods Density, DensityLn and CumulativeDistribution; Sample, Samples and an array-filling overload; static PDF, PDFLn and CDF taking the shape and scale explicitly; a parameter-estimation routine that returns a Weibull distribution, implemented according to "Parameter estimation of the Weibull probability distribution", 1994, Hongzhu Qiao, Chris P. Tsokos; and static Sample, Samples and array-fill methods taking the shape and scale, with or without a supplied random number generator.
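A minimal C# sketch of the Weibull members documented above; the namespace and the shape/scale values are assumptions for illustration.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions; // assumed namespace

// Weibull with shape k = 1.5 and scale λ = 2.0 (illustrative values)
var weibull = new Weibull(1.5, 2.0);

Console.WriteLine(weibull.Mean);
Console.WriteLine(weibull.Density(1.0));                // PDF at x = 1
Console.WriteLine(weibull.CumulativeDistribution(1.0)); // P(X ≤ 1)

double[] draws = weibull.Samples().Take(5).ToArray();
Console.WriteLine(string.Join(", ", draws));
```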
Multivariate Wishart distribution. This distribution is parameterized by the degrees of freedom n and the scale matrix V. The Wishart distribution is the conjugate prior for the precision (inverse covariance) matrix of the multivariate normal distribution; see Wikipedia - Wishart distribution. The implementation stores the degrees of freedom, the scale matrix and a cached Cholesky factorization of the scale matrix. The constructors take the degrees of freedom and the scale matrix, optionally followed by a random number generator. Members: IsValidParameterSet(n, V); the gettable/settable properties for the degrees of freedom and the scale matrix; ToString; the gettable/settable random number generator; the Mean, Mode and Variance of the distribution; Density, which evaluates the probability density at a given matrix and throws if the argument does not have the same dimensions as the scale matrix; and sampling routines (an instance Sample, a static Sample taking a random number generator, the degrees of freedom and the scale matrix, and an unchecked variant taking a precomputed Cholesky decomposition) that use "Algorithm AS 53: Wishart Variate Generator", W. B. Smith and R. R. Hocking, Applied Statistics, Vol. 21, No. 3 (1972), pp. 341-345.
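A minimal C# sketch of the Wishart members documented above; the class name Wishart, the namespaces and the 2x2 identity scale matrix are assumptions for illustration.

```csharp
using System;
using MathNet.Numerics.Distributions;   // assumed namespace
using MathNet.Numerics.LinearAlgebra;   // assumed namespace for the Matrix<double> type

// Wishart with n = 5 degrees of freedom and a 2x2 identity scale matrix (illustrative)
var scale = Matrix<double>.Build.DenseIdentity(2);
var wishart = new Wishart(5.0, scale);

Matrix<double> draw = wishart.Sample();  // a random 2x2 positive-definite matrix
Console.WriteLine(draw.ToString());

Console.WriteLine(wishart.Mean);         // n * V
Console.WriteLine(wishart.Density(draw)); // PDF evaluated at the sampled matrix
```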
Zipf evaluation and sampling. Probability computes the probability mass P(X = k) at k, ProbabilityLn the log probability mass ln(P(X = k)), and CumulativeDistribution the cumulative distribution P(X ≤ x); each is also available as a static method taking the s and n parameters explicitly. Sampling comprises an unchecked internal routine without parameter checking, instance Sample and Samples methods, a fill-array variant, and the corresponding static methods taking the random number generator, s and n. A usage sketch follows below.

Euclid: integer number theory functions. Modulus is the canonical modulus, whose result has the sign of the divisor; Remainder is the % operator, whose result has the sign of the dividend. Both are overloaded for the common numeric types.
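A minimal sketch of the Zipf probability-mass and sampling calls summarized above, assuming MathNet.Numerics.Distributions; s = 1.1 and n = 100 are illustrative parameter values.

```csharp
using System;
using MathNet.Numerics.Distributions;

class ZipfDemo
{
    static void Main()
    {
        // Zipf with exponent s = 1.1 over the n = 100 most frequent ranks.
        var zipf = new Zipf(1.1, 100);

        Console.WriteLine(zipf.Probability(1));               // P(X = 1)
        Console.WriteLine(zipf.ProbabilityLn(1));             // ln P(X = 1)
        Console.WriteLine(zipf.CumulativeDistribution(10.0)); // P(X <= 10)

        int k = zipf.Sample();  // one rank drawn from the distribution
        Console.WriteLine($"{k} {zipf.Mean} {zipf.Mode}");
    }
}
```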
Further integer helpers: IsEven and IsOdd test whether a 32-bit or 64-bit integer is even or odd, IsPowerOfTwo whether it is a perfect power of two, and IsPerfectSquare whether it is a perfect square, i.e. the square of an integer. PowerOfTwo raises 2 to the provided integer exponent (0 <= exponent < 31 for 32-bit results, 0 <= exponent < 63 for 64-bit results). Log2 evaluates the binary logarithm of an integer with a two-step method using a De Bruijn-like sequence table lookup. CeilingToPowerOfTwo finds the closest perfect power of two that is larger than or equal to the provided 32-bit or 64-bit integer.

GreatestCommonDivisor returns the greatest common divisor (gcd) of two integers, or of a list of integers, using Euclid's algorithm. ExtendedGreatestCommonDivisor additionally computes x and y such that a*x + b*y = gcd(a,b); for example gcd(45, 18) = 9 with x = 1 and y = -2, since 18 = 2*9, 45 = 5*9 and 9 = 1*45 - 2*18. LeastCommonMultiple returns the least common multiple (lcm) of two integers or of a list of integers. A combined sketch follows below.
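A brief sketch of the Euclid helpers above, assuming the MathNet.Numerics namespace; the numeric inputs are arbitrary examples.

```csharp
using System;
using MathNet.Numerics;

class EuclidDemo
{
    static void Main()
    {
        // Canonical modulus carries the sign of the divisor,
        // Remainder (%) carries the sign of the dividend.
        Console.WriteLine(Euclid.Modulus(-3, 4));   // 1
        Console.WriteLine(Euclid.Remainder(-3, 4)); // -3

        Console.WriteLine(Euclid.IsPowerOfTwo(1024));        // True
        Console.WriteLine(Euclid.CeilingToPowerOfTwo(100));  // 128

        // gcd / lcm and the extended gcd with Bezout coefficients.
        Console.WriteLine(Euclid.GreatestCommonDivisor(45, 18)); // 9
        Console.WriteLine(Euclid.LeastCommonMultiple(4, 6));     // 12

        long x, y;
        long d = Euclid.ExtendedGreatestCommonDivisor(45, 18, out x, out y);
        Console.WriteLine($"{d} = {x}*45 + {y}*18"); // 9 = 1*45 + -2*18
    }
}
```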
The same gcd/lcm family is provided for big integers: GreatestCommonDivisor and LeastCommonMultiple for two big integers or for a set of big integers, and ExtendedGreatestCommonDivisor computing x and y such that a*x + b*y = gcd(a,b); the gcd(45, 18) = 9, x = 1, y = -2 example applies unchanged. A sketch follows below.

ExcelFunctions is a collection of functions equivalent to those provided by Microsoft Excel, but backed by Math.NET Numerics instead. Using them is not recommended except as an intermediate step when porting solutions previously implemented in Excel.

Resource strings cover the common failure modes: an algorithm failed to converge, an algorithm failed to converge due to a numerical breakdown, an error occurred calling a native provider function, the native provider was unable to allocate sufficient memory, and the native provider failed LU inversion due to a singular U matrix.

Financial measures: Compound Monthly Return (also called Geometric Return or Annualized Return). Average Gain (Gain Mean) is the simple arithmetic mean of the periods with a gain, i.e. the sum of the returns of the gain periods (return > 0) divided by the number of gain periods; Average Loss (Loss Mean) is the analogous mean over the loss periods (return < 0) (see http://www.offshore-library.com/kb/statistics.php). Gain Standard Deviation is similar to the standard deviation, except it calculates an average (mean) return only for periods with a gain and measures the variation of only the gain periods around that gain mean, capturing the volatility of upside performance (© 1996, 1999 Gary L. Gastineau, first edition; © 1992 Swiss Bank Corporation). Loss Standard Deviation is the same construction over the losing periods only, capturing the volatility of downside performance.
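A small sketch of the big-integer overloads mentioned above, assuming MathNet.Numerics and System.Numerics; the sample values are arbitrary.

```csharp
using System;
using System.Numerics;
using MathNet.Numerics;

class BigGcdDemo
{
    static void Main()
    {
        BigInteger a = BigInteger.Parse("123456789012345678901234567890");
        BigInteger b = BigInteger.Parse("987654321098765432109876543210");

        // gcd and lcm on arbitrary-precision integers.
        BigInteger gcd = Euclid.GreatestCommonDivisor(a, b);
        BigInteger lcm = Euclid.LeastCommonMultiple(a, b);

        Console.WriteLine($"gcd = {gcd}");
        Console.WriteLine($"lcm = {lcm}");
    }
}
```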
Further downside measures: Downside Deviation is similar to the loss standard deviation, but considers only returns that fall below a defined minimum acceptable return (MAR) rather than the arithmetic mean; for example, with a MAR of 7% it measures the variation of every period that falls below 7%, whereas the loss standard deviation takes only losing periods, calculates their average return and measures the variation around that average. Semi-Deviation measures volatility of returns below the mean, i.e. only periods where the investment return was less than the average return. Gain/Loss Ratio measures a fund's average gain in a gain period divided by its average loss in a losing period; periods can be monthly or quarterly depending on the data frequency.

FindMinimum facade: one method finds the value x minimizing a scalar function f(x), constrained within bounds, using the Golden Section algorithm; a family of overloads finds the vector x minimizing f(x) using the Nelder-Mead simplex algorithm; further overloads use the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, its bounded variant BFGS-B (including one that evaluates the missing gradient numerically by forward differences), and the Newton algorithm. For more options and diagnostics the underlying minimizer classes can be used directly; an alternative routine using conjugate gradients (CG) is also available. A sketch follows below.

FindRoots facade: OfFunction finds a solution of the equation f(x) = 0 within a bracketing range [lower, upper], refined until the desired accuracy (for example 1e-14) or the maximum number of iterations (for example 100) is reached.
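A sketch of an unconstrained Nelder-Mead minimization using the MathNet.Numerics.Optimization classes that the FindMinimum facade described above wraps; the Rosenbrock objective and the starting point are illustrative, and the exact facade signatures should be checked against the installed version.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.Optimization;

class MinimizeDemo
{
    static void Main()
    {
        // Rosenbrock function, minimum at (1, 1).
        var objective = ObjectiveFunction.Value(
            p => Math.Pow(1 - p[0], 2) + 100 * Math.Pow(p[1] - p[0] * p[0], 2));

        var start = Vector<double>.Build.DenseOfArray(new[] { -1.2, 1.0 });
        var result = NelderMeadSimplex.Minimum(objective, start);

        Console.WriteLine(result.MinimizingPoint); // close to [1, 1]
        Console.WriteLine(result.Iterations);
    }
}
```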
A second root-finding overload additionally takes the first derivative of the function, with the same bracketing range, accuracy and iteration limit. Quadratic finds both complex roots of c + b*x + a*x^2 = 0 and Cubic all three complex roots of d + c*x + b*x^2 + a*x^3 = 0; note the coefficient order ascending by exponent, consistent with the polynomial routines. Polynomial finds all roots of a polynomial by calculating the characteristic polynomial of the companion matrix, taking either the coefficients in ascending order (e.g. new double[] {5, 0, 2} = "5 + 0 x^1 + 2 x^2") or a Polynomial instance, and returning the roots. ChebychevPolynomialFirstKind returns all roots of the Chebyshev polynomial of the first kind of order n (the order defines the number of roots), sampled in [a,b] at (b+a)/2 + (b-a)/2*cos(pi*(2i-1)/(2n)); ChebychevPolynomialSecondKind does the same for the second kind, sampled at (b+a)/2 + (b-a)/2*cos(pi*i/(n-1)). A root-finding sketch follows below.

Fit: least-squares curve fitting routines. Line fits the points (x,y) to a line y : x -> a + b*x, returning the best-fitting parameters as an [a, b] array where a is the intercept and b the slope, or a function y' for the best-fitting line. LineThroughOrigin fits y : x -> b*x with zero intercept, returning the slope b or the fitted function. Exponential fits y : x -> a*exp(r*x) returning (a, r); Logarithm fits y : x -> a + b*ln(x) returning (a, b); Power fits y : x -> a*x^b returning (a, b); each has a companion variant returning the fitted function instead of the parameters.
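A short sketch of the root-finding calls above, assuming the MathNet.Numerics.FindRoots facade; the cubic test function and the bracket [2, 3] are illustrative.

```csharp
using System;
using System.Numerics;
using MathNet.Numerics;

class RootsDemo
{
    static void Main()
    {
        // Bracketed root of f(x) = x^3 - 2x - 5 between 2 and 3.
        double r = FindRoots.OfFunction(x => x * x * x - 2 * x - 5, 2, 3);
        Console.WriteLine(r); // ~2.0945515

        // Both complex roots of 5 + 0*x + 2*x^2 = 0 (ascending coefficients c, b, a).
        var quad = FindRoots.Quadratic(5, 0, 2);
        Console.WriteLine($"{quad.Item1} {quad.Item2}");

        // All roots of the same polynomial via the companion matrix.
        Complex[] roots = FindRoots.Polynomial(new double[] { 5, 0, 2 });
        foreach (Complex z in roots) Console.WriteLine(z);
    }
}
```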
Polynomial fits the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, returning the best-fitting parameters as a [p0, p1, ..., pk] array compatible with Polynomial.Evaluate, or a function y' for the best-fitting polynomial; a weighted variant takes the points (x,y) and weights w. A polynomial of order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. LinearCombination fits the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), again returning either the parameter array or the fitted function. MultiDim fits points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk; if an intercept is added, its coefficient is prepended to the resulting parameters; a weighted variant is available. LinearMultiDim fits (X,y) to an arbitrary linear combination of functions of X, and a generic variant does the same for points (T,y) with an arbitrary item type T, each returning either the parameter array or the fitted function. A fitting sketch follows below.
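A compact sketch of the simple least-squares fits above, assuming the MathNet.Numerics.Fit class; the noisy sample data is fabricated for illustration.

```csharp
using System;
using MathNet.Numerics;

class FitDemo
{
    static void Main()
    {
        double[] x = { 1, 2, 3, 4, 5, 6 };
        double[] y = { 2.1, 3.9, 6.2, 7.8, 10.1, 12.2 }; // roughly 2*x

        // Straight line y = a + b*x; Item1 is the intercept a, Item2 the slope b.
        var line = Fit.Line(x, y);
        Console.WriteLine($"a = {line.Item1}, b = {line.Item2}");

        // Second-order polynomial, coefficients ascending: [p0, p1, p2].
        double[] p = Fit.Polynomial(x, y, 2);
        Console.WriteLine(string.Join(", ", p));

        // Evaluate the fitted polynomial at a new point.
        Console.WriteLine(Polynomial.Evaluate(3.5, p));
    }
}
```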
Curve performs non-linear least-squares fitting of the points (x,y) to an arbitrary function y : x -> f(p, x) with one, two or three parameters, returning either the best-fitting parameters or a function y' for the best-fitting curve.

Generate: routines to generate sample vectors and sequences. Map-style helpers generate samples (or sample sequences) by sampling a function at the provided points or point pairs. LinearSpaced generates a linearly spaced sample vector of the given length between the specified values (inclusive), equivalent to MATLAB linspace but with the length as first instead of last argument, with a variant that samples a function at those points. LogSpaced generates a base-10 logarithmically spaced vector of the given length between the specified decade exponents (inclusive), equivalent to MATLAB logspace with the length first, again with a function-sampling variant. LinearRange generates a linearly spaced vector within the inclusive interval (start, stop) with step 1, equivalent to the MATLAB colon operator (:), or with a provided step, equivalent to the MATLAB double colon operator (::); with an explicit step the start value is always included as first value, but stop is only included if stop-start is a multiple of the step. A sketch follows below.
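A small sketch of the spacing helpers above, assuming the MathNet.Numerics.Generate class; lengths and bounds are arbitrary.

```csharp
using System;
using MathNet.Numerics;

class SpacingDemo
{
    static void Main()
    {
        // 5 points from 0 to 1 inclusive: 0, 0.25, 0.5, 0.75, 1.
        double[] lin = Generate.LinearSpaced(5, 0.0, 1.0);

        // 4 points between the decade exponents 0 and 3: 1, 10, 100, 1000.
        double[] log = Generate.LogSpaced(4, 0.0, 3.0);

        // 1, 3, 5, 7, 9 -- start is always included, stop only if it fits the step.
        double[] range = Generate.LinearRange(1.0, 2.0, 9.0);

        Console.WriteLine(string.Join(", ", lin));
        Console.WriteLine(string.Join(", ", log));
        Console.WriteLine(string.Join(", ", range));
    }
}
```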
Further LinearRange overloads take the step for other numeric types, and a variant samples a function at the generated points; in all of them the start value is always included as first value, but stop is only included if stop-start is a multiple of the provided step.

Periodic creates a periodic wave from the number of samples, the sampling rate in samples per time unit (which must be larger than twice the frequency to satisfy the Nyquist criterion), the frequency in periods per time unit, the length of the period when sampled at one sample per time unit (the interval of the periodic domain, typically 1.0, or 2*Pi for angular functions), and an optional phase offset and delay relative to the phase; a variant applies a function to each value, and both are also available as infinite sequences. Sinusoidal creates a sine wave from the number of samples, sampling rate, frequency, amplitude (maximal peak), mean (DC part), and optional phase and delay, also available as an infinite sequence. Square creates a periodic square wave starting with the high phase, given the number of samples, the durations of the high and low phases in samples, the values emitted during the low and high phases, and an optional delay; an infinite sequence variant exists as well. A wave-generation sketch follows below.
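A sketch of generating a sine and a square wave with the calls described above, assuming the MathNet.Numerics.Generate class; the sampling rate, frequency and amplitudes are illustrative, and the Square parameter order follows the description above (high duration, low duration, low value, high value).

```csharp
using System;
using MathNet.Numerics;

class WaveDemo
{
    static void Main()
    {
        // 64 samples of a 50 Hz sine sampled at 1000 Hz, amplitude 1, zero mean.
        double[] sine = Generate.Sinusoidal(64, 1000.0, 50.0, 1.0);

        // Square wave: 4 samples high (+1), 4 samples low (-1), repeated.
        double[] square = Generate.Square(16, 4, 4, -1.0, 1.0);

        Console.WriteLine(string.Join(", ", sine));
        Console.WriteLine(string.Join(", ", square));
    }
}
```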
Triangle creates a periodic triangle wave, starting with the rise phase from the lowest sample, given the number of samples, the lengths of the rise and fall phases, the lowest and highest sample values, and an optional delay. Sawtooth creates a periodic sawtooth wave starting with the lowest sample, given the number of samples, the number of samples in a full sawtooth period, the lowest and highest values, and an optional delay. Repeat creates an array (or an infinite sequence) where every field is set to the same value. Step creates a Heaviside step vector from the number of samples, the amplitude and an offset to the time axis. Impulse creates a Kronecker delta impulse vector from the number of samples, the amplitude and a non-negative offset to the time axis, i.e. the sample index of the impulse; a periodic variant repeats the impulse with a given period. Further helpers generate samples, or an infinite sequence, from a given computation, and Fibonacci generates the Fibonacci sequence including zero as first value. Uniform creates random samples uniformly distributed between 0 and 1, faster than the other random generators but with reduced guarantees on randomness; mapping variants sample a function at such random values or value pairs. All of the wave generators above are also available as infinite sequences. A sketch follows below.
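A short sketch of the step, impulse and Fibonacci generators above, assuming the MathNet.Numerics.Generate class; lengths, amplitudes and offsets are arbitrary examples.

```csharp
using System;
using MathNet.Numerics;

class SignalDemo
{
    static void Main()
    {
        // Heaviside step of amplitude 1 that switches on at sample index 3.
        double[] step = Generate.Step(8, 1.0, 3);

        // Kronecker delta of amplitude 5 at sample index 2.
        double[] impulse = Generate.Impulse(8, 5.0, 2);

        // 0, 1, 1, 2, 3, 5, 8, 13
        double[] fib = Generate.Fibonacci(8);

        Console.WriteLine(string.Join(", ", step));
        Console.WriteLine(string.Join(", ", impulse));
        Console.WriteLine(string.Join(", ", fib));
    }
}
```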
Standard creates samples, or an infinite sequence, with independent amplitudes drawn from the standard distribution; Normal creates samples with independent normally distributed amplitudes and a flat spectral density. Random creates samples or infinite sequences from an arbitrary probability distribution, and mapping variants sample a function at such random values or value pairs.

GlobalizationHelper contains globalized string handling helpers: they try to get the CultureInfo, NumberFormatInfo or TextInfo from an IFormatProvider that supplies culture-specific formatting information, falling back to the current culture on failure. Further helpers tokenize a node containing a trimmed string by a list of keywords (skipping keywords that have already been handled) and parse double and float numbers using the current culture information.

GoodnessOfFit: RSquared calculates r^2, the square of the sample Pearson product-moment correlation coefficient between the observed outcomes (actual values) and the observed predictor values (modelled/predicted values); it is not to be confused with R^2, the coefficient of determination. R calculates r itself, the sample Pearson product-moment correlation coefficient. A sketch follows below.
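A sketch of evaluating r^2 for a simple fitted line, combining the Fit and GoodnessOfFit calls described above, assuming the MathNet.Numerics namespace; the data is fabricated.

```csharp
using System;
using System.Linq;
using MathNet.Numerics;

class GoodnessDemo
{
    static void Main()
    {
        double[] x = { 1, 2, 3, 4, 5 };
        double[] y = { 2.2, 3.9, 6.1, 8.2, 9.9 };

        // Fit a line and evaluate it at the observed x values.
        Func<double, double> line = Fit.LineFunc(x, y);
        double[] modelled = x.Select(line).ToArray();

        // r^2: squared Pearson correlation between modelled and observed values.
        Console.WriteLine(GoodnessOfFit.RSquared(modelled, y));
    }
}
```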
StandardError calculates the standard error of the regression from a sequence of modelled/predicted values and a sequence of actual/observed values, optionally taking the degrees of freedom by which the number of samples is reduced for the calculation. CoefficientOfDetermination calculates the R-squared value, the coefficient of determination, from the values expected by the model and the actual values obtained.

Fourier: a complex fast (FFT) implementation of the discrete Fourier transform (DFT). Forward applies the forward FFT in place to arbitrary-length sample vectors, with overloads for single- and double-precision complex vectors, optionally taking Fourier transform convention options, and for separate real and imaginary part arrays. A packed real-complex forward FFT handles real-valued time samples: since their complex spectrum is conjugate-even (symmetric), the spectrum can be fully reconstructed from the positive frequencies only (first half), and the data array must be N+2 (if N is even) or N+1 (if N is odd) entries long to hold such a packed spectrum, where N is the number of samples. Further overloads apply the forward FFT to multidimensional sample data, given the data size per dimension, where the first dimension is the major one: with two dimensions "rows" and "columns", the samples are assumed to be organized row by row.
Two-dimensional forward overloads take the sample data organized row by row together with the number of rows and columns (data organized column by column can be processed directly by swapping the rows and columns arguments), and matrix overloads evaluate the forward FFT in place on a sample matrix; all accept Fourier transform convention options.

Inverse applies the inverse fast Fourier transform (iFFT) in place to arbitrary-length spectrum vectors, with the same family of overloads as the forward direction: single- and double-precision complex vectors, optional convention options, separate real and imaginary part arrays, and a packed real-complex variant in which the spectrum is reconstructed from the positive frequencies only (first half), with a data array of length N+2 (if N is even) or N+1 (if N is odd).
Inverse multidimensional, two-dimensional and matrix overloads mirror the forward ones, again taking the data size per dimension or the number of rows and columns along with the Fourier transform convention options. NaiveForward and NaiveInverse implement the plain DFT, useful e.g. to verify the faster algorithms: they map a time-space sample vector to the corresponding frequency-space vector and back, honouring the convention options. Radix2Forward and Radix2Inverse provide the radix-2 FFT for power-of-two sized sample vectors, evaluated in place. A round-trip sketch follows below.
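A sketch of a forward/inverse FFT round trip using the in-place Fourier calls described above, assuming MathNet.Numerics.IntegralTransforms and System.Numerics.Complex; the 8-sample test signal is arbitrary.

```csharp
using System;
using System.Linq;
using System.Numerics;
using MathNet.Numerics;
using MathNet.Numerics.IntegralTransforms;

class FftDemo
{
    static void Main()
    {
        // One period of a sine wave, wrapped into a complex sample vector.
        Complex[] samples = Generate.Sinusoidal(8, 8.0, 1.0, 1.0)
                                    .Select(v => new Complex(v, 0.0))
                                    .ToArray();

        // In-place forward FFT using the Matlab convention (no forward scaling).
        Fourier.Forward(samples, FourierOptions.Matlab);
        Console.WriteLine(string.Join("; ", samples));

        // In-place inverse FFT restores the original signal (up to rounding).
        Fourier.Inverse(samples, FourierOptions.Matlab);
        Console.WriteLine(string.Join("; ", samples.Select(c => c.Real)));
    }
}
```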
BluesteinForward and BluesteinInverse provide forward and inverse FFTs for arbitrary-sized sample vectors, evaluated in place with the chosen convention options. FrequencyScale generates the frequency corresponding to each index in frequency space: the resolution is sampleRate/N, index 0 is the DC part, the following indices are the positive frequencies up to the Nyquist frequency (sampleRate/2), followed by the negative frequencies wrapped around; it takes the number of samples and the sampling rate of the time-space data.

FourierOptions collects the Fourier transform conventions: inverse integrand exponent (forward: positive sign, inverse: negative sign); asymmetric scaling (scale only by 1/N in the inverse direction, no scaling in the forward direction); no scaling at all; the universal default with symmetric scaling and common exponent (used in Maple); Matlab (equal to AsymmetricScaling); and NumericalRecipes (equal to InverseExponent | NoScaling, as used in all Numerical Recipes based implementations).

Hartley is the fast (FHT) implementation of the discrete Hartley transform (DHT). NaiveForward and NaiveInverse provide the plain DHT between time space and frequency space, useful e.g. to verify faster algorithms, plus a naive generic variant; helpers rescale the FFT- and iFFT-resulting vectors according to the provided convention options. HartleyOptions offers asymmetric scaling (only by 1/N on the inverse), no scaling, and the universal symmetric scaling default.

Integrate: numerical integration (quadrature). OnClosedInterval approximates the definite integral of an analytic smooth function on a closed, finite interval to an expected accuracy and returns the approximation of the finite integral in the given interval.
A second closed-interval overload omits the accuracy argument. OnRectangle approximates a two-dimensional definite integral of an analytic smooth function over the rectangle [a,b] x [c,d] using an Nth order Gauss-Legendre rule; the order also defines the number of abscissas and weights, with precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512 and 1024 and on-the-fly computation otherwise, plus an overload without an explicit order. DoubleExponential approximates the definite integral by double-exponential quadrature; when either or both limits are infinite, the integrand is assumed to decay rapidly to zero as x -> infinity. GaussLegendre does the same by Gauss-Legendre quadrature of a given order, and GaussKronrod by Gauss-Kronrod quadrature with a target relative accuracy, a maximum number of interval splittings permitted before stopping, and a number of Gauss-Kronrod points precomputed for 15, 21, 31, 41, 51 and 61 points. A further Gauss-Kronrod overload additionally reports the difference between the (N-1)/2-point Gauss approximation and the N-point Gauss-Kronrod approximation, as well as the L1 norm of the result; a significant difference between that norm and the returned value indicates that the result is likely ill-conditioned. All return the approximation of the finite integral in the given interval. A sketch follows below.
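A small sketch of the Integrate facade described above, assuming the MathNet.Numerics.Integrate class; the Gaussian-bump integrand and bounds are illustrative, and the exact overload names beyond OnClosedInterval should be checked against the installed version.

```csharp
using System;
using MathNet.Numerics;

class IntegrateDemo
{
    static void Main()
    {
        // Integral of exp(-x^2) over [0, 1], adaptive on the closed interval.
        double v1 = Integrate.OnClosedInterval(x => Math.Exp(-x * x), 0.0, 1.0);
        Console.WriteLine(v1); // ~0.7468

        // 2-D integral of x*y over the rectangle [0,1] x [0,2]
        // with a 32nd order Gauss-Legendre rule (overload name assumed).
        double v2 = Integrate.OnRectangle((x, y) => x * y, 0.0, 1.0, 0.0, 2.0, 32);
        Console.WriteLine(v2); // 1.0
    }
}
```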
ContourIntegrate provides numerical contour integration of a complex-valued function over a real variable. The same quadrature family is available as for real integrands: double-exponential quadrature with an expected relative accuracy, Gauss-Legendre quadrature of a given order (with the usual precomputed orders), and Gauss-Kronrod quadrature with a target relative accuracy, a maximum number of interval splittings and a number of Gauss-Kronrod points precomputed for 15, 21, 31, 41, 51 and 61 points, including the variant that reports the Gauss vs. Gauss-Kronrod difference and the L1 norm of the result. When either or both limits are infinite, the integrand is assumed to decay rapidly to zero as x -> infinity, and each method returns the approximation of the finite integral in the given interval.
DoubleExponentialTransformation is the analytic integration algorithm for smooth functions with no discontinuities or derivative discontinuities and no poles inside the interval. It exposes the maximum number of iterations until the requested maximum error is likely to be satisfied, Integrate methods for real and for complex-valued smooth functions on a finite closed interval with an expected relative accuracy, and routines that compute, and cache precomputed, abscissa and weight vectors per level.

GaussKronrodRule exposes the order as well as clones of the arrays containing the Kronrod abscissas, the Kronrod weights and the Gauss weights. It performs adaptive Gauss-Kronrod quadrature of a real or complex-valued function over the range (a,b), reporting the difference between the (N-1)/2-point Gauss and N-point Gauss-Kronrod approximations and the L1 norm of the result (a significant difference between that norm and the returned value indicates an ill-conditioned result), and taking the maximum relative error, the maximum number of interval splittings permitted before stopping and the number of Gauss-Kronrod points, precomputed for 15, 21, 31, 41, 51 and 61 points.

GaussLegendreRule approximates a definite integral using an Nth order Gauss-Legendre rule; precomputed abscissas/weights exist for orders 2-20, 32, 64, 96, 100, 128, 256, 512 and 1024, otherwise they are calculated on the fly. Its constructor takes where the interval starts and stops (inclusive and finite) and the order, which also defines the number of abscissas and weights.
Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - - - - Gettter for the ith abscissa. - - Index of the ith abscissa. - The ith abscissa. - - - - Getter that returns a clone of the array containing the abscissas. - - - - - Getter for the ith weight. - - Index of the ith weight. - The ith weight. - - - - Getter that returns a clone of the array containing the weights. - - - - - Getter for the order. - - - - - Getter for the InvervalBegin. - - - - - Getter for the InvervalEnd. - - - - - Approximates a definite integral using an Nth order Gauss-Legendre rule. - - The analytic smooth function to integrate. - Where the interval starts, exclusive and finite. - Where the interval ends, exclusive and finite. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Approximates a definite integral using an Nth order Gauss-Legendre rule. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts, exclusive and finite. - Where the interval ends, exclusive and finite. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. - - The 2-dimensional analytic smooth function to integrate. - Where the interval starts for the first (inside) integral, exclusive and finite. - Where the interval ends for the first (inside) integral, exclusive and finite. - Where the interval starts for the second (outside) integral, exclusive and finite. - /// Where the interval ends for the second (outside) integral, exclusive and finite. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Contains a method to compute the Gauss-Kronrod abscissas/weights and precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. - - - Contains a method to compute the Gauss-Kronrod abscissas/weights. - - - - - Precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. - - - - - Computes the Gauss-Kronrod abscissas/weights and Gauss weights. - - Defines an Nth order Gauss-Kronrod rule. The order also defines the number of abscissas and weights for the rule. - Required precision to compute the abscissas/weights. - Object containing the non-negative abscissas/weights, order. - - - - Returns coefficients of a Stieltjes polynomial in terms of Legendre polynomials. - - - - - Return value and derivative of a Legendre series at given points. - - - - - Return value and derivative of a Legendre polynomial of order at given points. - - - - - Creates a Gauss-Kronrod point. - - - - - Getter for the GaussKronrodPoint. 
- - Defines an Nth order Gauss-Kronrod rule. Precomputed Gauss-Kronrod abscissas/weights for orders 15, 21, 31, 41, 51, 61 are used, otherwise they're calculated on the fly. - Object containing the non-negative abscissas/weights, and order. - - - - Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. - - - Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. - - - - - Precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. - - - - - Computes the Gauss-Legendre abscissas/weights. - See Pavel Holoborodko for a description of the algorithm. - - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. - Required precision to compute the abscissas/weights. 1e-10 is usually fine. - Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. - - - - Creates and maps a Gauss-Legendre point. - - - - - Getter for the GaussPoint. - - Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. - - - - Getter for the GaussPoint. - - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. - - - - Maps the non-negative abscissas/weights from the interval [-1, 1] to the interval [intervalBegin, intervalEnd]. - - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. - Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. - - - - Contains the abscissas/weights, order, and intervalBegin/intervalEnd. - - - - - Contains two GaussPoint. - - - - - Approximation algorithm for definite integrals by the Trapezium rule of the Newton-Cotes family. - - - Wikipedia - Trapezium Rule - - - - - Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Approximation of the finite integral in the given interval. - - - - Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth complex function to integrate, defined on real domain. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Approximation of the finite integral in the given interval. 
- - - - Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Number of composite subdivision partitions. - Approximation of the finite integral in the given interval. - - - - Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth complex function to integrate, defined on real domain. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Number of composite subdivision partitions. - Approximation of the finite integral in the given interval. - - - - Adaptive approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - The expected accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Adaptive approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth complex function to integrate, define don real domain. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - The expected accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Adaptive approximation of the definite integral by the trapezium rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Abscissa vector per level provider. - Weight vector per level provider. - First Level Step - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Adaptive approximation of the definite integral by the trapezium rule. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Abscissa vector per level provider. - Weight vector per level provider. - First Level Step - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Approximation algorithm for definite integrals by Simpson's rule. - - - - - Direct 3-point approximation of the definite integral in the provided interval by Simpson's rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Approximation of the finite integral in the given interval. - - - - Composite N-point approximation of the definite integral in the provided interval by Simpson's rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Even number of composite subdivision partitions. - Approximation of the finite integral in the given interval. - - - - Interpolation Factory. - - - - - Creates an interpolation based on arbitrary points. - - The sample points t. - The sample point values x(t). - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. 
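These quadrature entry points are easiest to see in use. The sketch below is a minimal illustration, assuming the standard MathNet.Numerics API surface (Integrate.OnClosedInterval, GaussLegendreRule.Integrate, SimpsonRule.IntegrateComposite); the exact names should be checked against the bundled library version.

```csharp
using System;
using MathNet.Numerics;
using MathNet.Numerics.Integration;

class QuadratureDemo
{
    static void Main()
    {
        // Analytic smooth integrand on a finite interval.
        Func<double, double> f = x => Math.Exp(-x) * Math.Cos(x);

        // Double-exponential quadrature via the convenience facade.
        double a = Integrate.OnClosedInterval(f, 0.0, 2.0 * Math.PI);

        // 32nd-order Gauss-Legendre rule (precomputed abscissas/weights).
        double b = GaussLegendreRule.Integrate(f, 0.0, 2.0 * Math.PI, 32);

        // Composite Simpson's rule with an even number of partitions.
        double c = SimpsonRule.IntegrateComposite(f, 0.0, 2.0 * Math.PI, 64);

        // All three estimates should agree closely.
        Console.WriteLine($"{a} {b} {c}");
    }
}
```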
MathNet.Numerics.Interpolation: the Interpolate factory builds an interpolation scheme from arbitrary sample points t and values x(t), which can then be used to compute interpolations and extrapolations at arbitrary points. Factory methods cover:

* Floater-Hormann barycentric rational interpolation without poles, Bulirsch-Stoer rational interpolation with poles, barycentric polynomial interpolation for equidistant sample points, and Neville polynomial interpolation for arbitrary points (for equidistant or Chebyshev points the barycentric variants are recommended as much more robust).
* Piecewise linear, piecewise log-linear and step interpolation (segment i is [x_i, x_i+1), the last segment is open-ended).
* Cubic splines: natural (zero second derivative at the boundaries), Akima (robust to outliers), Hermite (sample points plus their slopes/first derivatives) and splines with custom boundary conditions (Natural, ParabolicallyTerminated, FixedFirstDerivative, FixedSecondDerivative).

Each factory method notes that if the data is already sorted in arrays, the corresponding sorted creator on the algorithm class (for example Barycentric.InterpolateRationalFloaterHormannSorted, CubicSpline.InterpolateNaturalSorted, CubicSpline.InterpolateAkimaSorted, LinearSpline.InterpolateSorted, StepInterpolation.InterpolateSorted) is more efficient; the unsorted in-place creators warn that they may reorder the data arrays. All algorithm classes expose Interpolate(t), Differentiate(t), Differentiate2(t), an indefinite Integrate(t) and a definite Integrate(a, b), together with SupportsDifferentiation and SupportsIntegration flags: the linear, cubic and quadratic splines and step interpolation support both differentiation and integration, log-linear interpolation supports differentiation only, Neville polynomial interpolation supports differentiation but not integration, and the barycentric and Bulirsch-Stoer rational schemes support neither. The Floater-Hormann order satisfies 0 <= order <= N, and values between 3 and 8 usually give good results. CubicSpline additionally provides a three-point differentiation helper and a tridiagonal solve helper, and a TransformedInterpolation wrapper applies a transformation to the interpolated values of an inner scheme (neither differentiation nor integration supported).
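As an illustration of that common interface, here is a minimal sketch using the sorted creators named above (CubicSpline.InterpolateNaturalSorted and LinearSpline.InterpolateSorted); the namespace and member names are assumed to match the bundled MathNet.Numerics version.

```csharp
using System;
using MathNet.Numerics.Interpolation;

class InterpolationDemo
{
    static void Main()
    {
        // Sample points must already be sorted ascending for the *Sorted creators.
        double[] t = { 0.0, 1.0, 2.0, 3.0, 4.0 };
        double[] x = { 0.0, 0.8, 0.9, 0.1, -0.8 };

        IInterpolation spline = CubicSpline.InterpolateNaturalSorted(t, x);
        IInterpolation linear = LinearSpline.InterpolateSorted(t, x);

        Console.WriteLine(spline.Interpolate(2.5));     // interpolated value x(2.5)
        Console.WriteLine(spline.Differentiate(2.5));   // first derivative at t = 2.5
        Console.WriteLine(spline.Integrate(0.0, 4.0));  // definite integral over [0,4]

        Console.WriteLine(linear.Interpolate(2.5));     // piecewise linear for comparison
    }
}
```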
MathNet.Numerics.LinearAlgebra.Double.DenseMatrix is a matrix with dense storage: a one-dimensional array in column-major order (column by column). Creators cover zero-initialised square and rectangular matrices (zero-length matrices are not supported), direct binding to a raw column-major array (very efficient, but changes to the array and the matrix affect each other), independent copies of another matrix, a two-dimensional array, column or row arrays, column or row vectors, (indexed) enumerables and enumerables of column or row enumerables, diagonal matrices built from a vector or array, matrices initialised with a constant value or an init function, the identity matrix, and matrices with values sampled from a random distribution. The type exposes the induced L1 norm (maximum absolute column sum), the induced infinity norm (maximum absolute row sum), the entry-wise Frobenius norm (square root of the sum of squared values), the trace (square matrices only) and an IsSymmetric check. Arithmetic kernels cover negation, scalar and matrix addition and subtraction, scalar, vector and matrix multiplication (including transpose-this-times and this-times-transpose variants), scalar division, pointwise multiplication, division and power, and the canonical modulus and remainder with either a scalar divisor or a scalar dividend. The +, -, * and unary negation operators allocate new memory for the result and pick the representation of whichever operand is denser; a matrix can also be multiplied by a vector on either side.
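A short sketch of the dense matrix type in use; DenseMatrix.OfArray, DenseMatrix.Create and the operator overloads are assumed to be the creators summarised above.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class DenseMatrixDemo
{
    static void Main()
    {
        // Independent copy of a two-dimensional array.
        Matrix<double> a = DenseMatrix.OfArray(new double[,]
        {
            { 4.0, 1.0 },
            { 1.0, 3.0 }
        });

        // Function-initialised matrix: b[i, j] = i + 2*j.
        Matrix<double> b = DenseMatrix.Create(2, 2, (i, j) => i + 2.0 * j);

        Matrix<double> sum = a + b;    // element-wise addition
        Matrix<double> prod = a * b;   // matrix product

        Console.WriteLine(a.L1Norm());        // maximum absolute column sum: 5
        Console.WriteLine(a.FrobeniusNorm()); // sqrt(16 + 1 + 1 + 9)
        Console.WriteLine(a.Trace());         // 4 + 3 = 7
        Console.WriteLine(prod);
    }
}
```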
DenseVector is the corresponding vector type with dense storage. Creators mirror the matrix ones: zero-initialised vectors of a given length (zero-length vectors are not supported), direct binding to a raw array (changes to the array and the vector affect each other), independent copies of another vector, an array or an (indexed) enumerable, vectors initialised with a constant value or an init function, and vectors sampled from a random distribution. It provides scalar and vector addition and subtraction, negation, scalar multiplication and division, the dot product, the canonical modulus and remainder, the index of the (absolute) minimum and maximum element, the sum of the elements, and the L1 (Manhattan) norm, L2 (Euclidean) norm, infinity norm and general p-norm ( ∑|this[i]|^p )^(1/p), as well as pointwise division and pointwise power into a caller-supplied result vector. Parse and TryParse convert strings of the form 'n', 'n,n,..', '(n,n,..)' or '[n,n,...]' into a double-precision dense vector, optionally with an IFormatProvider supplying culture-specific formatting information; on failure TryParse leaves the result null.
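A matching sketch for the vector type; DenseVector.OfArray, DenseVector.Create and the norm/index members are assumed to be the ones summarised above.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class DenseVectorDemo
{
    static void Main()
    {
        // Independent copy of an array, and a function-initialised vector (1, 2, 3).
        Vector<double> v = DenseVector.OfArray(new[] { 3.0, -4.0, 0.0 });
        Vector<double> w = DenseVector.Create(3, i => i + 1.0);

        Console.WriteLine(v + w);                    // element-wise sum
        Console.WriteLine(v.DotProduct(w));          // 3*1 + (-4)*2 + 0*3 = -5
        Console.WriteLine(v.L1Norm());               // Manhattan norm: 7
        Console.WriteLine(v.L2Norm());               // Euclidean norm: 5
        Console.WriteLine(v.AbsoluteMaximumIndex()); // index of the largest |v[i]|: 1
    }
}
```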
DiagonalMatrix stores only the diagonal, which always starts at element (0,0) even for non-square matrices; setting a non-diagonal entry throws an exception, except for the values 0.0 and NaN, which cause no change. Creators parallel the dense ones: zero-initialised matrices, a constant diagonal value, direct binding to a raw array holding only the diagonal elements, independent copies of other matrices or two-dimensional arrays (which must themselves be diagonal), (indexed) enumerables, an init function, the identity matrix and diagonal values sampled from a random distribution. The type implements the scalar, vector and matrix arithmetic kernels (including a scalar divided by each element), the determinant, accessors that copy the Min(Rows, Columns) diagonal elements to and from an array or vector, the induced L1, L2 (largest singular value), infinity and Frobenius norms, the condition number, the inverse (square, non-singular matrices only), the lower/upper and strictly lower/upper triangles, sub-matrix extraction and the canonical modulus and remainder; column and row permutations always throw, because permuting a diagonal matrix is senseless.

Cholesky encapsulates the Cholesky factorization of a symmetric, positive definite matrix A as a lower triangular matrix L with A = L*L'. The factorization is computed at construction time and cached; the constructor throws if the matrix is not square, not symmetric or not positive definite. The determinant and log-determinant of the factored matrix are exposed, and the dense implementation solves AX = B and Ax = b using the cached factor.

Evd holds the eigenvalues and eigenvectors of a real matrix. If A is symmetric then A = V*D*V', where the eigenvalue matrix D is diagonal and the eigenvector matrix V is orthogonal (V*V^T = I). If A is not symmetric, D is block diagonal, with the real eigenvalues in 1-by-1 blocks and complex eigenvalue pairs lambda + i*mu in 2-by-2 blocks [lambda, mu; -mu, lambda]. The columns of V represent the eigenvectors in the sense that A*V = V*D, i.e. A.Multiply(V) equals V.Multiply(D); V may be badly conditioned or even singular, so the validity of A = V*D*Inverse(V) depends on the condition of V.
This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The type of QR factorization to perform. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. 
- - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - Matrix V is encoded in the property EigenVectors in the way that: - - column corresponding to real eigenvalue represents real eigenvector, - - columns corresponding to the pair of complex conjugate eigenvalues - lambda[i] and lambda[i+1] encode real and imaginary parts of eigenvectors. - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. 
- In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. 
- - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Symmetric Householder reduction to tridiagonal form. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tred2 by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Complex scalar division X/Y. - - Real part of X - Imaginary part of X - Real part of Y - Imaginary part of Y - Division result as a number. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. 
- - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. 
- Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Double value z1 - Double value z2 - Result multiplication of signum function and absolute value - - - - Swap column and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Calculate Norm 2 of the column in matrix starting from row - - Source matrix - The number of rows in - Column index - Start row index - Norm2 (Euclidean norm) of the column - - - - Calculate Norm 2 of the vector starting from index - - Source vector - Start index - Norm2 (Euclidean norm) of the vector - - - - Calculate dot product of and - - Source matrix - The number of rows in - Index of column A - Index of column B - Starting row index - Dot product value - - - - Performs rotation of points in the plane. Given two vectors x and y , - each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) - - Source matrix - The number of rows in - Index of column A - Index of column B - Scalar "c" value - Scalar "s" value - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - double version of the class. - - - - - Initializes a new instance of the Matrix class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Returns the conjugate transpose of this matrix. - - The conjugate transpose of this matrix. - - - - Puts the conjugate transpose of this matrix into the result matrix. - - - - - Complex conjugates each element of this matrix and place the results into the result matrix. 
- - The result of the conjugation. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to divide by each element of the matrix. - The matrix to store the result of the division. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. 
- - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Calculates the p-norms of all row vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the p-norms of all column vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all row vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all column vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the value sum of each row vector. - - - - - Calculates the absolute value sum of each row vector. - - - - - Calculates the value sum of each column vector. - - - - - Calculates the absolute value sum of each column vector. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A Bi-Conjugate Gradient stabilized iterative matrix solver. 
- - - - The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' - of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the - BiCGStab can be used on non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The Bi-CGSTAB algorithm was taken from:
- Templates for the solution of linear systems: Building blocks - for iterative methods -
- Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, - June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, - Charles Romine and Henk van der Vorst -
- Url: http://www.netlib.org/templates/Templates.html -
- Algorithm is described in Chapter 2, section 2.3.8, page 27 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient , A. - The solution , b. - The result , x. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A composite matrix solver. The actual solver is made by a sequence of - matrix solvers. - - - - Solver based on:
- Faster PDE-based simulations using robust composite linear solvers
- S. Bhowmicka, P. Raghavan a,*, L. McInnes b, B. Norris
- Future Generation Computer Systems, Vol 20, 2004, pp 373�387
-
- - Note that if an iterator is passed to this solver it will be used for all the sub-solvers. - -
-
- - - The collection of solvers that will be used - - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A diagonal preconditioner. The preconditioner uses the inverse - of the matrix diagonal as preconditioning values. - - - - - The inverse of the matrix diagonal. - - - - - Returns the decomposed matrix diagonal. - - The matrix diagonal. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - A Generalized Product Bi-Conjugate Gradient iterative matrix solver. - - - - The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an - alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. - Unlike the CG solver the GPBiCG solver can be used on - non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The GPBiCG algorithm was taken from:
- GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with - efficiency and robustness -
- S. Fujino -
- Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 -
-
- - The example code below provides an indication of the possible use of the - solver. - -
-
- - - Indicates the number of BiCGStab steps should be taken - before switching. - - - - - Indicates the number of GPBiCG steps should be taken - before switching. - - - - - Gets or sets the number of steps taken with the BiCgStab algorithm - before switching over to the GPBiCG algorithm. - - - - - Gets or sets the number of steps taken with the GPBiCG algorithm - before switching over to the BiCgStab algorithm. - - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Decide if to do steps with BiCgStab - - Number of iteration - true if yes, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - An incomplete, level 0, LU factorization preconditioner. - - - The ILU(0) algorithm was taken from:
- Iterative methods for sparse linear systems
- Yousef Saad
- Algorithm is described in Chapter 10, section 10.3.2, page 275
-
-
- - - The matrix holding the lower (L) and upper (U) matrices. The - decomposition matrices are combined to reduce storage. - - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - A new matrix containing the lower triagonal elements. - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - This class performs an Incomplete LU factorization with drop tolerance - and partial pivoting. The drop tolerance indicates which additional entries - will be dropped from the factorized LU matrices. - - - The ILUTP-Mem algorithm was taken from:
- ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner -
- Tzu-Yi Chen, Department of Mathematics and Computer Science,
- Pomona College, Claremont CA 91711, USA
- Published in:
- Lecture Notes in Computer Science
- Volume 3046 / 2004
- pp. 20 - 28
- Algorithm is described in Section 2, page 22 -
-
- - - The default fill level. - - - - - The default drop tolerance. - - - - - The decomposed upper triangular matrix. - - - - - The decomposed lower triangular matrix. - - - - - The array containing the pivot values. - - - - - The fill level. - - - - - The drop tolerance. - - - - - The pivot tolerance. - - - - - Initializes a new instance of the class with the default settings. - - - - - Initializes a new instance of the class with the specified settings. - - - The amount of fill that is allowed in the matrix. The value is a fraction of - the number of non-zero entries in the original matrix. Values should be positive. - - - The absolute drop tolerance which indicates below what absolute value an entry - will be dropped from the matrix. A drop tolerance of 0.0 means that no values - will be dropped. Values should always be positive. - - - The pivot tolerance which indicates at what level pivoting will take place. A - value of 0.0 means that no pivoting will take place. - - - - - Gets or sets the amount of fill that is allowed in the matrix. The - value is a fraction of the number of non-zero entries in the original - matrix. The standard value is 200. - - - - Values should always be positive and can be higher than 1.0. A value lower - than 1.0 means that the eventual preconditioner matrix will have fewer - non-zero entries as the original matrix. A value higher than 1.0 means that - the eventual preconditioner can have more non-zero values than the original - matrix. - - - Note that any changes to the FillLevel after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the absolute drop tolerance which indicates below what absolute value - an entry will be dropped from the matrix. The standard value is 0.0001. - - - - The values should always be positive and can be larger than 1.0. A low value will - keep more small numbers in the preconditioner matrix. A high value will remove - more small numbers from the preconditioner matrix. - - - Note that any changes to the DropTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the pivot tolerance which indicates at what level pivoting will - take place. The standard value is 0.0 which means pivoting will never take place. - - - - The pivot tolerance is used to calculate if pivoting is necessary. Pivoting - will take place if any of the values in a row is bigger than the - diagonal value of that row divided by the pivot tolerance, i.e. pivoting - will take place if row(i,j) > row(i,i) / PivotTolerance for - any j that is not equal to i. - - - Note that any changes to the PivotTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the lower triagonal elements. 
- - - - Returns the pivot array. This array is not needed for normal use because - the preconditioner will return the solution vector values in the proper order. - - - This method is used for debugging purposes only and should normally not be used. - - The pivot array. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. Note that the - method takes a general matrix type. However internally the data is stored - as a sparse matrix. Therefore it is not recommended to pass a dense matrix. - - If is . - If is not a square matrix. - - - - Pivot elements in the according to internal pivot array - - Row to pivot in - - - - Was pivoting already performed - - Pivots already done - Current item to pivot - true if performed, otherwise false - - - - Swap columns in the - - Source . - First column index to swap - Second column index to swap - - - - Sort vector descending, not changing vector but placing sorted indices to - - Start sort form - Sort till upper bound - Array with sorted vector indices - Source - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - Pivot elements in according to internal pivot array - - Source . - Result after pivoting. - - - - An element sort algorithm for the class. - - - This sort algorithm is used to sort the columns in a sparse matrix based on - the value of the element on the diagonal of the matrix. - - - - - Sorts the elements of the vector in decreasing - fashion. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Sorts the elements of the vector in decreasing - fashion using heap sort algorithm. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Build heap for double indices - - Root position - Length of - Indices of - Target - - - - Sift double indices - - Indices of - Target - Root position - Length of - - - - Sorts the given integers in a decreasing fashion. - - The values. - - - - Sort the given integers in a decreasing fashion using heapsort algorithm - - Array of values to sort - Length of - - - - Build heap - - Target values array - Root position - Length of - - - - Sift values - - Target value array - Root position - Length of - - - - Exchange values in array - - Target values array - First value to exchange - Second value to exchange - - - - A simple milu(0) preconditioner. - - - Original Fortran code by Yousef Saad (07 January 2004) - - - - Use modified or standard ILU(0) - - - - Gets or sets a value indicating whether to use modified or standard ILU(0). - - - - - Gets a value indicating whether the preconditioner is initialized. - - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square or is not an - instance of SparseCompressedRowMatrixStorage. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector b. - The left hand side vector x. - - - - MILU0 is a simple milu(0) preconditioner. - - Order of the matrix. - Matrix values in CSR format (input). - Column indices (input). 
- Row pointers (input). - Matrix values in MSR format (output). - Row pointers and column indices (output). - Pointer to diagonal elements (output). - True if the modified/MILU algorithm should be used (recommended) - Returns 0 on success or k > 0 if a zero pivot was encountered at step k. - - - - A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' - of the standard BiCgStab solver. - - - The algorithm was taken from:
- ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors -
- Man-Chung Yeung and Tony F. Chan -
- SIAM Journal of Scientific Computing -
- Volume 21, Number 4, pp. 1263 - 1290 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
- - - The default number of starting vectors. - - - - - The collection of starting vectors which are used as the basis for the Krylov sub-space. - - - - - The number of starting vectors used by the algorithm - - - - - Gets or sets the number of starting vectors. - - - Must be larger than 1 and smaller than the number of variables in the matrix that - for which this solver will be used. - - - - - Resets the number of starting vectors to the default value. - - - - - Gets or sets a series of orthonormal vectors which will be used as basis for the - Krylov sub-space. - - - - - Gets the number of starting vectors to create - - Maximum number - Number of variables - Number of starting vectors to create - - - - Returns an array of starting vectors. - - The maximum number of starting vectors that should be created. - The number of variables. - - An array with starting vectors. The array will never be larger than the - but it may be smaller if - the is smaller than - the . - - - - - Create random vectors array - - Number of vectors - Size of each vector - Array of random vectors - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Source A. - Residual data. - x data. - b data. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. - - - - The TFQMR algorithm was taken from:
- Iterative methods for sparse linear systems. -
- Yousef Saad -
- Algorithm is described in Chapter 7, section 7.4.3, page 219 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Is even? - - Number to check - true if even, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. - The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. - Wikipedia - CSR. - - - - - Gets the number of non zero elements in the matrix. - - The number of non zero elements. - - - - Create a new sparse matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new sparse matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable. - The enumerable is assumed to be in row-major order (row by row). - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - - Create a new sparse matrix with the given number of rows and columns as a copy of the given array. - The array is assumed to be in column-major order (column by column). - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix and initialize each value to the same provided value. - - - - - Create a new sparse matrix and initialize each value using the provided init function. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . 
- If the result matrix's dimensions are not the same as this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. 
- - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Evaluates whether this matrix is symmetric. - - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - A vector with sparse storage, intended for very large vectors where most of the cells are zero. - - The sparse vector is not thread safe. - - - - Gets the number of non zero elements in the vector. - - The number of non zero elements. - - - - Create a new sparse vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new sparse vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new sparse vector as a copy of the given other vector. 
- This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector and initialize each value using the provided value. - - - - - Create a new sparse vector and initialize each value using the provided init function. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled - sparse vector and very inefficient. Would be better to work with a dense vector instead. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. - - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Negates vector and saves result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . 
- - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the modulus of. - The divisor to use, - The result of the calculation - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Creates a double sparse vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a double. - - - A double sparse vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - double version of the class. - - - - - Initializes a new instance of the Vector class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Conjugates vector and save result to - - Target vector - - - - Negates vector and saves result to - - Target vector - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. 
- - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Divides each element of the vector by a scalar and stores the result in the result vector. - - - The scalar to divide with. - - - The vector to store the result of the division. - - - - - Divides a scalar by each element of the vector and stores the result in the result vector. - - The scalar to divide. - The vector to store the result of the division. - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise raise this vector to an exponent and store the result into the result vector. - - The exponent to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Returns the value of the absolute minimum element. - - The value of the absolute minimum element. 
- - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the value of the absolute maximum element. - - The value of the absolute maximum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - - The p value. - - - Scalar ret = ( ∑|At(i)|^p )^(1/p) - - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Normalizes this vector to a unit vector with respect to the p-norm. - - - The p value. - - - This vector normalized to a unit vector with respect to the p-norm. - - - - - A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). - - - - - Number of rows. - - Using this instead of the RowCount property to speed up calculating - a matrix index in the data array. - - - - Number of columns. - - Using this instead of the ColumnCount property to speed up calculating - a matrix index in the data array. - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new dense matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new dense matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to be in column-major order (column by column) and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - - Create a new dense matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable. - The enumerable is assumed to be in column-major order (column by column). - This new matrix will be independent from the enumerable. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. 
- - - - - Create a new dense matrix and initialize each value to the same provided value. - - - - - Create a new dense matrix and initialize each value using the provided init function. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new dense matrix with values sampled from the provided random distribution. - - - - - Gets the matrix's data. - - The matrix's data. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of add - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. 
- - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Evaluates whether this matrix is symmetric. - - - - - A vector using dense storage. - - - - - Number of elements - - - - - Gets the vector's data. - - - - - Create a new dense vector straight from an initialized vector storage instance. 
- The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new dense vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new dense vector directly binding to a raw array. - The array is used directly without copying. - Very efficient, but changes to the array and the vector will affect each other. - - - - - Create a new dense vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given array. - This new vector will be independent from the array. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector and initialize each value using the provided value. - - - - - Create a new dense vector and initialize each value using the provided init function. - - - - - Create a new dense vector with values sampled from the provided random distribution. - - - - - Gets the vector's data. - - The vector's data. - - - - Returns a reference to the internal data structure. - - The DenseVector whose internal data we are - returning. - - A reference to the internal date of the given vector. - - - - - Returns a vector bound directly to a reference of the provided array. - - The array to bind to the DenseVector object. - - A DenseVector whose values are bound to the given array. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - The scalar to add. - The vector to store the result of the addition. - - - - Adds another vector to this vector and stores the result into the result vector. - - The vector to add to this one. - The vector to store the result of the addition. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The vector to store the result of the subtraction. - - - - Subtracts another vector from this vector and stores the result into the result vector. - - The vector to subtract from this one. - The vector to store the result of the subtraction. - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Negates vector and saves result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - The scalar to multiply. 
- The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . - - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the modulus of. - The divisor to use, - The result of the calculation - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise multiply this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply this one by. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Creates a float dense vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. - - - A float dense vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a real dense vector to float-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. 
- - - A string containing a real vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a real dense vector to float-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - A matrix type for diagonal matrices. - - - Diagonal matrices can be non-square matrices but the diagonal always starts - at element 0,0. A diagonal matrix will throw an exception if non diagonal - entries are set. The exception to this is when the off diagonal elements are - 0.0 or NaN; these settings will cause no change to the diagonal matrix. - - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new diagonal matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to contain the diagonal elements only and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new diagonal matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - The matrix to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - The array to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value using the provided init function. 
- - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new diagonal matrix with diagonal values sampled from the provided random distribution. - - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - If the result matrix's dimensions are not the same as this matrix. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the division. - - - - Computes the determinant of this matrix. - - The determinant of this matrix. - - - - Returns the elements of the diagonal in a . - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Copies the values of the given array to the diagonal. - - The array to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - - Copies the values of the given to the diagonal. - - The vector to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - Calculates the condition number of this matrix. 
- The condition number of the matrix. - - - Computes the inverse of this matrix. - If is not a square matrix. - If is singular. - The inverse of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Permute the columns of a matrix according to a permutation. - - The column permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Permute the rows of a matrix according to a permutation. - - The row permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Evaluates whether this matrix is symmetric. - - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - A class which encapsulates the functionality of a Cholesky factorization. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. 
- - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Gets the determinant of the matrix for which the Cholesky matrix was computed. - - - - - Gets the log determinant of the matrix for which the Cholesky matrix was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for dense matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. 
On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. 
- The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. 
A class which encapsulates the functionality of the singular value decomposition (SVD). Suppose M is an m-by-n matrix whose entries are real numbers. Then there exists a factorization of the form M = UΣVᵀ, where U is an m-by-m unitary matrix, Σ is an m-by-n diagonal matrix with nonnegative real numbers on the diagonal, and Vᵀ denotes the transpose of the n-by-n unitary matrix V. Ordering the diagonal entries Σ(i,i) in descending order makes Σ uniquely determined by M (though U and V are not); the diagonal entries of Σ are the singular values of M. The decomposition is computed at construction time and exposes the effective numerical matrix rank (the number of non-negligible singular values), the 2-norm of the factored matrix, the condition number max(S) / min(S), and the determinant of the square matrix for which the SVD was computed.

A class which encapsulates the functionality of a Cholesky factorization for user matrices. For a symmetric, positive definite matrix A, the Cholesky factorization is a lower triangular matrix L so that A = L*L'. The factorization is computed at construction time; if the matrix is not symmetric or not positive definite, the constructor throws an exception. The class provides an in-place factorization routine (on entry the matrix to factor, on exit the Cholesky factor), a Factorize method for the input matrix (throwing if it is null, not square, not positive definite, or does not have the same dimensions as the existing factor), an internal Cholesky step parameterized by the factor matrix, the number of rows, the column range, the previously calculated multipliers and the number of available processors, and Solve methods for AX = B and Ax = b with A Cholesky-factorized.

Eigenvalues and eigenvectors of a real matrix, with the same conventions as the eigenvalue decomposition described above (a symmetric A gives A = V*D*V' with a diagonal D and an orthogonal V; a nonsymmetric A gives a block-diagonal D with complex conjugate pairs in 2-by-2 blocks, and the validity of A = V*D*Inverse(V) depends on the condition of V). The decomposition is computed when the constructor is called and cached; if it is known whether the matrix is symmetric, the routine can skip checking this itself. The constructor throws if the matrix is null or if the EVD algorithm fails to converge.
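For the symmetric, positive definite case above, the Cholesky route is the natural choice; a minimal sketch, again assuming the Math.NET Numerics API (`.Cholesky()` and its `Factor` property are that library's names, not taken from this file):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class CholeskyDemo
{
    static void Main()
    {
        // Symmetric positive definite matrix; a non-SPD input would make the
        // factorization constructor throw, as documented above.
        var A = Matrix<float>.Build.DenseOfArray(new float[,]
        {
            { 4f, 2f, 0f },
            { 2f, 5f, 1f },
            { 0f, 1f, 3f }
        });
        var b = Vector<float>.Build.DenseOfArray(new[] { 2f, 1f, 3f });

        var chol = A.Cholesky();            // A = L * L'
        Console.WriteLine(chol.Solve(b));   // x with A Cholesky-factorized
        Console.WriteLine(chol.Factor);     // the lower triangular factor L
    }
}
```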
The eigenvalue computation uses the classical EISPACK-derived routines: a symmetric Householder reduction to tridiagonal form (derived from the Algol procedure tred2 by Bowdler, Martin, Reinsch and Wilkinson, Handbook for Auto. Comp., Vol. II - Linear Algebra, and the corresponding EISPACK Fortran subroutine), a symmetric tridiagonal QL algorithm (derived from the Algol procedure tql2 from the same source), a nonsymmetric reduction to Hessenberg form (derived from the Algol procedures orthes and ortran by Martin and Wilkinson) and a nonsymmetric reduction from Hessenberg to real Schur form (derived from the Algol procedure hqr2 by Martin and Wilkinson), plus a small helper for complex scalar division X/Y given the real and imaginary parts of X and Y. The class solves AX = B and Ax = b with A EVD-factorized.

A class which encapsulates the functionality of the QR decomposition by modified Gram-Schmidt orthogonalization for user matrices: any real square matrix A may be decomposed as A = QR, where Q is an orthogonal m-by-n matrix and R is an n-by-n upper triangular matrix. The orthogonal matrix is created with the modified Gram-Schmidt method when the constructor is called; the constructor throws if the matrix is null, if the row count is less than the column count, or if the matrix is rank deficient. The class solves AX = B and Ax = b with A QR-factorized.
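The reductions above feed the eigenvalue decomposition; a usage sketch for a symmetric matrix (so D is real and V orthogonal), assuming the Math.NET Numerics API (`Evd()`, `EigenValues`, `EigenVectors` and `D` are that library's names):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class EvdDemo
{
    static void Main()
    {
        // Symmetric matrix: A = V * D * V' with an orthogonal V, as described above.
        var A = Matrix<float>.Build.DenseOfArray(new float[,]
        {
            { 2f, 1f },
            { 1f, 2f }
        });

        var evd = A.Evd();
        Console.WriteLine(evd.EigenValues);    // complex vector; here purely real: 1 and 3
        Console.WriteLine(evd.EigenVectors);   // columns are the eigenvectors V

        // Verify A*V = V*D up to floating point noise.
        var residual = A * evd.EigenVectors - evd.EigenVectors * evd.D;
        Console.WriteLine(residual.FrobeniusNorm());
    }
}
```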
A class which encapsulates the functionality of an LU factorization for user matrices: for a matrix A, a pair of a lower triangular matrix L and an upper triangular matrix U so that A = L*U, computed when the constructor is called and cached. The constructor throws if the matrix is null or not square; the class solves AX = B and Ax = b with A LU-factorized and returns the inverse of the matrix, calculated using the LU decomposition.

A class which encapsulates the functionality of the QR decomposition for user matrices: any real square matrix A may be decomposed as A = QR, where Q is an orthogonal matrix (its columns are orthogonal unit vectors, so QᵀQ = I) and R is an upper triangular (right triangular) matrix. The decomposition is computed at construction time by Householder transformation; the constructor takes the matrix to factor and the QR factorization method to use, and throws if the matrix is null. Internal helpers generate a working column from the initial matrix (given the first row and the column index) and perform the actual calculation of Q or R over a given row and column range using a given number of available CPUs. The class solves AX = B and Ax = b with A QR-factorized.

A class which encapsulates the functionality of the singular value decomposition (SVD) for user float matrices, with the same M = UΣVᵀ conventions described above; the constructor takes the matrix to factor and a flag selecting whether the singular U and Vᵀ vectors are computed, and throws if the matrix is null. Its internal helpers are the usual BLAS/LAPACK-style kernels: the absolute value of one value multiplied by the signum of another, swapping two columns, scaling a column from a given row, scaling a vector from a given index, the Givens rotation (given the Cartesian coordinates (da, db) of a point p, it returns the parameters r, z, c and s of the rotation that zeros the y-coordinate of the point; this is equivalent to the DROTG LAPACK routine), the Euclidean norm (norm 2) of a column starting from a given row, the Euclidean norm of a vector starting from a given index, the dot product of two columns starting from a given row, and the plane rotation of points in two columns (x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i)). The class solves AX = B and Ax = b with A SVD-factorized.
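The Givens rotation kernel listed among the SVD helpers above is small enough to sketch directly. The routine below re-implements the DROTG-style computation described there (return r, z, c and s for the rotation that zeros the y-coordinate of the point (da, db)); it is an illustration of the formula, not this library's internal code.

```csharp
static class GivensRotation
{
    // DROTG-style Givens rotation: find c, s with
    //   [ c  s ] [ da ]   [ r ]
    //   [-s  c ] [ db ] = [ 0 ]
    // and return (r, z, c, s), where z encodes the rotation the way DROTG does.
    public static (float r, float z, float c, float s) RotG(float da, float db)
    {
        float scale = System.Math.Abs(da) + System.Math.Abs(db);
        if (scale == 0f)
            return (0f, 0f, 1f, 0f);               // nothing to rotate

        float roe = System.Math.Abs(da) > System.Math.Abs(db) ? da : db;
        double ads = da / scale, bds = db / scale;
        float r = scale * (float)System.Math.Sqrt(ads * ads + bds * bds);
        if (roe < 0f) r = -r;                      // r carries the sign of the larger input

        float c = da / r;
        float s = db / r;
        float z = System.Math.Abs(da) > System.Math.Abs(db) ? s : (c != 0f ? 1f / c : 1f);
        return (r, z, c, s);
    }
}
```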
The float version of the Matrix class provides the dense arithmetic kernels used throughout the library: coercing all values whose absolute value is smaller than a threshold to zero; the conjugate transpose; conjugating and negating into a result matrix; adding or subtracting a scalar or another matrix (throwing if the other matrix is null or the dimensions differ); multiplying by a scalar, a vector or another matrix; dividing each element by a scalar and dividing a scalar by each element; multiplying this matrix with the transpose or conjugate transpose of another matrix, and the transpose or conjugate transpose of this matrix with another matrix or with a vector; the canonical modulus (the result has the sign of the divisor) and the remainder (% operator, the result has the sign of the dividend) for a scalar divisor or dividend; the pointwise operations multiply, divide, power, canonical modulus and remainder against another matrix; and the pointwise exponential and natural logarithm into a result matrix. It also computes the Moore-Penrose pseudo-inverse, the trace (throwing if the matrix is not square), the induced L1 norm (the maximum absolute column sum), the induced infinity norm (the maximum absolute row sum) and the entry-wise Frobenius norm (the square root of the sum of the squared values); it can calculate the p-norms of all row or column vectors and normalize them to a unit p-norm (typical values for p are 1.0 for the L1/Manhattan norm, 2.0 for the L2/Euclidean norm and positive infinity for the infinity norm), return the value sum and absolute value sum of each row or column vector, and evaluate whether the matrix is Hermitian (conjugate symmetric).

A Bi-Conjugate Gradient stabilized iterative matrix solver. The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' of the standard Conjugate Gradient (CG) solver: unlike the CG solver, BiCGStab can be used on non-symmetric matrices. Note that much of the success of the solver depends on the selection of a proper preconditioner. The Bi-CGSTAB algorithm was taken from: "Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods", Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Charles Romine and Henk van der Vorst, http://www.netlib.org/templates/Templates.html; the algorithm is described in Chapter 2, section 2.3.8, page 27. The example code below provides an indication of the possible use of the solver.
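A minimal sketch of such use, assuming the Math.NET Numerics solver API (the `BiCgStab`, `Iterator<float>`, stop-criterion and `DiagonalPreconditioner` class names follow that library's conventions and are not taken from this file):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Single.Solvers;   // assumed float solver namespace

class BiCgStabDemo
{
    static void Main()
    {
        // Non-symmetric sparse system A x = b.
        var A = Matrix<float>.Build.SparseOfIndexed(3, 3, new[]
        {
            Tuple.Create(0, 0, 4f), Tuple.Create(0, 1, 1f),
            Tuple.Create(1, 1, 3f), Tuple.Create(1, 2, 2f),
            Tuple.Create(2, 0, 1f), Tuple.Create(2, 2, 5f)
        });
        var b = Vector<float>.Build.DenseOfArray(new[] { 1f, 2f, 3f });
        var x = Vector<float>.Build.Dense(3);

        // Stop after 1000 iterations or once the residual is small enough.
        var iterator = new Iterator<float>(
            new IterationCountStopCriterion<float>(1000),
            new ResidualStopCriterion<float>(1e-5));

        // Solve(A, b, x, iterator, preconditioner) matches the Solve signature documented for this solver.
        new BiCgStab().Solve(A, b, x, iterator, new DiagonalPreconditioner());
        Console.WriteLine(x);
    }
}
```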
The solver calculates the true residual of the matrix equation Ax = b as residual = b - Ax, and solves the matrix equation Ax = b, where A is the coefficient matrix, b is the solution vector and x is the unknown vector, given an iterator that controls when to stop iterating and a preconditioner to use for approximations.

A composite matrix solver. The actual solver is made up of a sequence of matrix solvers. The solver is based on: "Faster PDE-based simulations using robust composite linear solvers", S. Bhowmick, P. Raghavan, L. McInnes and B. Norris, Future Generation Computer Systems, Vol. 20, 2004, pp. 373-387. Note that if an iterator is passed to this solver it will be used for all the sub-solvers. The class holds the collection of solvers that will be used and solves Ax = b given the coefficient matrix, solution vector, result vector, iterator and preconditioner.

A diagonal preconditioner. The preconditioner uses the inverse of the matrix diagonal as preconditioning values. It stores the inverse of the matrix diagonal, can return the decomposed matrix diagonal, is initialized from the matrix upon which it is based (throwing if the matrix is null or not square), and approximates the solution to the matrix equation Ax = b given the right-hand side vector and the left-hand side (result) vector.

A Generalized Product Bi-Conjugate Gradient (GPBiCG) iterative matrix solver. The GPBiCG solver is an alternative version of the Bi-Conjugate Gradient stabilized (BiCGStab) solver; unlike the CG solver, it can be used on non-symmetric matrices. Note that much of the success of the solver depends on the selection of a proper preconditioner. The GPBiCG algorithm was taken from: "GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness", S. Fujino, Applied Numerical Mathematics, Volume 41, 2002, pp. 107-117. The example code below provides an indication of the possible use of the solver.
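A corresponding sketch for GPBiCG, under the same Math.NET Numerics naming assumptions (`GpBiCg`, `Iterator<float>`, `DiagonalPreconditioner`); the diagonal (Jacobi) preconditioner described above simply divides by the matrix diagonal:

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Single.Solvers;   // assumed float solver namespace

static class GpBiCgDemo
{
    // Solve A x = b with GPBiCG; same Solve(A, b, x, iterator, preconditioner)
    // shape as documented for the BiCGStab solver above.
    public static Vector<float> Solve(Matrix<float> A, Vector<float> b)
    {
        var x = Vector<float>.Build.Dense(b.Count);
        var iterator = new Iterator<float>(
            new IterationCountStopCriterion<float>(1000),
            new ResidualStopCriterion<float>(1e-5));

        new GpBiCg().Solve(A, b, x, iterator, new DiagonalPreconditioner());
        return x;
    }
}
```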
The GPBiCG solver tracks the number of BiCGStab steps to take before switching and the number of GPBiCG steps to take before switching back, both exposed as gettable and settable properties. It calculates the true residual of Ax = b as residual = b - Ax, decides for a given iteration number whether to take BiCGStab steps, and solves the matrix equation Ax = b given the coefficient matrix, solution vector, result vector, iterator and preconditioner.

An incomplete, level 0, LU factorization preconditioner. The ILU(0) algorithm was taken from: "Iterative Methods for Sparse Linear Systems", Yousef Saad; the algorithm is described in Chapter 10, section 10.3.2, page 275.
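For orientation, the level-0 constraint means the factorization only updates entries where the original matrix already has a non-zero, so no fill-in is created. The dense-array sketch below illustrates that idea; it is not the library's (sparse, storage-combined) implementation.

```csharp
static class Ilu0Sketch
{
    // ILU(0) on a dense array: standard IKJ elimination, but updates are applied only
    // where A already has a non-zero entry. The unit-lower factor L ends up below the
    // diagonal and U on and above it, overwriting A.
    public static void FactorInPlace(float[,] A)
    {
        int n = A.GetLength(0);
        for (int i = 1; i < n; i++)
        {
            for (int k = 0; k < i; k++)
            {
                if (A[i, k] == 0f) continue;          // outside the sparsity pattern
                A[i, k] /= A[k, k];                   // multiplier for L
                for (int j = k + 1; j < n; j++)
                {
                    if (A[i, j] == 0f) continue;      // would be fill-in: dropped by ILU(0)
                    A[i, j] -= A[i, k] * A[k, j];
                }
            }
        }
    }
}
```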
The ILU(0) preconditioner holds a single matrix containing both the lower (L) and upper (U) factors; the decomposition matrices are combined to reduce storage. It can return the upper and lower triangular matrices that were created during the LU decomposition, is initialized from the matrix upon which the preconditioner is based (throwing if the matrix is null or not square), and approximates the solution to Ax = b given the right-hand side vector and the left-hand side (result) vector.

A preconditioner class which performs an incomplete LU factorization with drop tolerance and partial pivoting (ILUTP). The drop tolerance indicates which additional entries will be dropped from the factorized LU matrices. The ILUTP-Mem algorithm was taken from: "ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner", Tzu-Yi Chen, Department of Mathematics and Computer Science, Pomona College, Claremont CA 91711, USA; published in Lecture Notes in Computer Science, Volume 3046 / 2004, pp. 20-28 (the algorithm is described in Section 2, page 22).
The ILUTP preconditioner stores the decomposed upper and lower triangular matrices, an array of pivot values, and three tuning parameters with defaults: the fill level, the drop tolerance and the pivot tolerance. It can be constructed with the default settings or with explicit values for the amount of fill allowed in the matrix (a fraction of the number of non-zero entries in the original matrix; values should be positive), the absolute drop tolerance (entries whose absolute value falls below it are dropped; 0.0 means no values are dropped; values should always be positive) and the pivot tolerance (the level at which pivoting takes place; 0.0 means no pivoting). The fill level (standard value 200) can be higher than 1.0: a value lower than 1.0 means the eventual preconditioner matrix will have fewer non-zero entries than the original matrix, a value higher than 1.0 means it can have more. The drop tolerance (standard value 0.0001) can also be larger than 1.0: a low value keeps more small numbers in the preconditioner matrix, a high value removes more. The pivot tolerance (standard value 0.0, meaning pivoting never takes place) is used to decide whether pivoting is necessary: pivoting takes place if row(i,j) > row(i,i) / PivotTolerance for any j not equal to i. All three properties throw if a negative value is provided, and any change to them after creating the preconditioner invalidates it and requires re-initialization. The upper and lower triangular matrices created during the decomposition and the pivot array can be returned for debugging purposes only; the pivot array is not needed in normal use because the preconditioner returns the solution vector values in the proper order.

Initialization takes the matrix upon which the preconditioner is based (throwing if it is null or not square); the method accepts a general matrix type, but internally the data is stored as a sparse matrix, so passing a dense matrix is not recommended. Internal helpers pivot a row according to the internal pivot array, check whether pivoting was already performed for an item, swap two columns, sort a range of a vector in descending order by writing the sorted indices into an array (leaving the vector itself unchanged), approximate the solution to Ax = b, and pivot the elements of a result vector according to the pivot array. A companion element-sort algorithm sorts the columns of a sparse matrix based on the value of the element on the diagonal, using heap sort both for double values with index arrays and for plain integer arrays (building the heap, sifting, and exchanging values).

A simple MILU(0) preconditioner, based on original Fortran code by Yousef Saad (07 January 2004). A flag selects between the modified and the standard ILU(0) variant, and a property indicates whether the preconditioner has been initialized. Initialization takes the matrix upon which the preconditioner is based and throws if the matrix is null, not square, or not backed by sparse compressed-row (CSR) storage; a further method approximates the solution to Ax = b given the right-hand side vector b and the left-hand side vector x. The core MILU0 routine takes the order of the matrix and the matrix values in CSR format with their column indices and row pointers as input, produces the factorization values in MSR format together with combined row pointers/column indices and a pointer to the diagonal elements as output, and accepts a flag selecting the modified (MILU) algorithm, which is recommended. It returns 0 on success, or k > 0 if a zero pivot was encountered at step k.

A Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) iterative matrix solver, an 'improvement' of the standard BiCGStab solver. The algorithm was taken from: "ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors", Man-Chung Yeung and Tony F. Chan, SIAM Journal on Scientific Computing, Volume 21, Number 4, pp. 1263-1290.
The ML(k)-BiCGStab solver maintains a collection of starting vectors which are used as the basis for the Krylov sub-space, together with a default for their number. The number of starting vectors can be read or set (it must be larger than 1 and smaller than the number of variables in the matrix for which the solver is used) and can be reset to the default value; a series of orthonormal vectors to use as the basis can also be supplied directly. Helper routines determine how many starting vectors to create from a maximum and the number of variables, build the array of starting vectors (the array is never larger than the requested maximum, but may be smaller when the number of variables is smaller), and create an array of random vectors of a given count and size. The solver also calculates the true residual residual = b - Ax and solves Ax = b given the coefficient matrix, solution vector, result vector, iterator and preconditioner.

A Transpose-Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. The TFQMR algorithm was taken from: "Iterative Methods for Sparse Linear Systems", Yousef Saad; the algorithm is described in Chapter 7, section 7.4.3, page 219. The example code below provides an indication of the possible use of the solver.
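A sketch of such use, again assuming the Math.NET Numerics solver API (`TFQMR`, `Iterator<float>`, `DiagonalPreconditioner`); the last line prints the true residual b - Ax that the solvers above use as a convergence check:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Single.Solvers;   // assumed float solver namespace

static class TfqmrDemo
{
    public static Vector<float> Solve(Matrix<float> A, Vector<float> b)
    {
        var x = Vector<float>.Build.Dense(b.Count);
        var iterator = new Iterator<float>(
            new IterationCountStopCriterion<float>(1000),
            new ResidualStopCriterion<float>(1e-5));

        new TFQMR().Solve(A, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine((b - A * x).L2Norm());   // true residual: residual = b - Ax
        return x;
    }
}
```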
The TFQMR solver calculates the true residual residual = b - Ax, uses a small helper to test whether an iteration number is even, and solves Ax = b given the coefficient matrix, solution vector, result vector, iterator and preconditioner.

A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. The underlying storage scheme is the 3-array compressed-sparse-row (CSR) format (see the Wikipedia article on CSR). The class exposes the number of non-zero elements in the matrix and a set of factory methods: creating a matrix straight from an initialized matrix storage instance (the storage is used directly without copying; intended for advanced scenarios where you work directly with storage for performance or interop reasons); creating a new square or rectangular sparse matrix of a given size with all cells initialized to zero (zero-length matrices are not supported, and an exception is thrown if the row or column count is less than one); and creating independent copies, each backed by a newly allocated memory block, of another matrix, a two-dimensional array, an indexed enumerable (keys must be provided at most once, zero is assumed if a key is omitted), a row-major enumerable, a column-major array of given dimensions, enumerables of enumerable columns, and column arrays.
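A small sketch of creating such a matrix from indexed (row, column, value) triples, assuming the Math.NET Numerics builder API (`Matrix<float>.Build.SparseOfIndexed` follows that library's conventions):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class SparseMatrixDemo
{
    static void Main()
    {
        // Only the listed cells are stored; every other cell is an implicit zero in the
        // CSR (values / column-indices / row-pointers) storage described above.
        var A = Matrix<float>.Build.SparseOfIndexed(1000, 1000, new[]
        {
            Tuple.Create(0, 0, 2.0f),
            Tuple.Create(1, 1, 2.0f),
            Tuple.Create(999, 0, -1.0f)
        });

        Console.WriteLine(A.RowCount);    // 1000
        Console.WriteLine(A[999, 0]);     // -1
        Console.WriteLine(A[500, 500]);   // 0 (not stored)
    }
}
```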
Further factory methods create a new sparse matrix as an independent copy of column arrays, column vectors, enumerables of enumerable rows, row arrays or row vectors, create a matrix whose diagonal is a copy of a given vector or array, initialize every value (or every diagonal value) to a provided constant or with an init function, and create a square sparse identity matrix where each diagonal value is set to one; in each case a new memory block is allocated. The matrix can return its lower or upper triangle, and its strictly lower or strictly upper triangle (excluding the diagonal elements), either as a new matrix or written into a result matrix (throwing if the result is null or its dimensions are not the same as this matrix). Arithmetic kernels negate into a result matrix, compute the induced infinity norm (the maximum absolute row sum) and the entry-wise Frobenius norm, add and subtract another matrix (throwing on null or dimension mismatch), multiply by a scalar, a vector or another matrix, multiply with the transpose of another matrix or the transpose of this matrix with a vector, perform pointwise multiplication and division, compute the canonical modulus (sign of the divisor) and remainder (sign of the dividend) for a scalar divisor, and evaluate whether the matrix is symmetric. Operator overloads cover addition, subtraction, negation, multiplication of a matrix by a constant on either side, matrix-matrix, matrix-vector and vector-matrix products, and a conversion returning a matrix with the same values; the binary operators allocate new memory for the result, choose the representation of whichever operand is denser, and throw if an operand is null or the dimensions do not conform.

A vector with sparse storage, intended for very large vectors where most of the cells are zero. The sparse vector is not thread safe. It exposes the number of non-zero elements and factory methods that create a vector straight from an initialized storage instance (used directly without copying, for advanced performance or interop scenarios), create a zero-initialized vector of a given length (zero-length vectors are not supported), and create independent copies of another vector, of an enumerable, or of an indexed enumerable (keys must be provided at most once, zero is assumed if a key is omitted), as well as vectors initialized from a constant value or an init function. Arithmetic members add or subtract a scalar or another vector (note that adding a non-zero scalar to a sparse vector produces a 100% filled and therefore very inefficient sparse vector; a dense vector would be the better choice), negate into a result vector, multiply by a scalar, compute the dot product (the sum of a[i]*b[i] for all i), and compute the canonical modulus and remainder for a scalar divisor. Operator overloads provide addition, subtraction, negation, multiplication by a scalar on either side, the dot product of a row vector and a column vector, division by a scalar and the remainder (%) with a scalar divisor, throwing when operands are null or the sizes differ. Query members return the indices of the minimum, maximum, absolute minimum and absolute maximum elements, the sum of the elements, the L1 (Manhattan) norm, the infinity norm and the general p-norm (∑|this[i]|^p)^(1/p), and a pointwise multiplication into a result vector is provided. A float sparse vector can also be parsed from a string in the formats (without the quotes) 'n', 'n,n,..', '(n,n,..)' or '[n,n,...]', where n is a float, optionally with an IFormatProvider supplying culture-specific formatting information; TryParse variants return a value indicating whether the conversion succeeded and leave the result null on failure.
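A short sketch of building and querying a sparse vector, assuming the Math.NET Numerics builder API (`Vector<float>.Build.SparseOfIndexed` follows that library's conventions):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class SparseVectorDemo
{
    static void Main()
    {
        // Only indices 3 and 120 are stored; the other 99,998 entries are implicit zeros.
        var v = Vector<float>.Build.SparseOfIndexed(100000, new[]
        {
            Tuple.Create(3, 1.5f),
            Tuple.Create(120, -2.0f)
        });

        Console.WriteLine(v.Sum());                    // -0.5
        Console.WriteLine(v.L1Norm());                 // 3.5
        Console.WriteLine(v.AbsoluteMaximumIndex());   // 120
    }
}
```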
- - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Divides each element of the vector by a scalar and stores the result in the result vector. - - - The scalar to divide with. - - - The vector to store the result of the division. - - - - - Divides a scalar by each element of the vector and stores the result in the result vector. - - The scalar to divide. - The vector to store the result of the division. - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise raise this vector to an exponent and store the result into the result vector. - - The exponent to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Returns the value of the absolute minimum element. - - The value of the absolute minimum element. 
- - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the value of the absolute maximum element. - - The value of the absolute maximum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - - The p value. - - - Scalar ret = ( ∑|At(i)|^p )^(1/p) - - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Normalizes this vector to a unit vector with respect to the p-norm. - - - The p value. - - - This vector normalized to a unit vector with respect to the p-norm. - - - - - A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). - - - - - Number of rows. - - Using this instead of the RowCount property to speed up calculating - a matrix index in the data array. - - - - Number of columns. - - Using this instead of the ColumnCount property to speed up calculating - a matrix index in the data array. - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new dense matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new dense matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to be in column-major order (column by column) and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - - Create a new dense matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable. - The enumerable is assumed to be in column-major order (column by column). - This new matrix will be independent from the enumerable. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. 
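The constructor descriptions above cover the various ways a dense matrix can be built (from 2D arrays, column or row arrays, vectors, or a diagonal). A hedged sketch of the corresponding `Matrix<double>.Build` factory calls — names taken from the usual Math.NET Numerics builder API and assumed to match the version documented here — follows:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class DenseMatrixDemo
{
    static void Main()
    {
        var M = Matrix<double>.Build;

        // From a 2D array (copied into new column-major storage).
        var a = M.DenseOfArray(new double[,] { { 1, 2 }, { 3, 4 } });

        // From column arrays and from row arrays; both describe the same matrix here.
        var byColumns = M.DenseOfColumnArrays(new[] { 1.0, 3.0 }, new[] { 2.0, 4.0 });
        var byRows    = M.DenseOfRowArrays(new[] { 1.0, 2.0 }, new[] { 3.0, 4.0 });

        // Square matrix with the given diagonal; off-diagonal entries stay zero.
        var diag = M.DenseOfDiagonalArray(new[] { 1.0, 2.0, 3.0 });

        Console.WriteLine(a);
        Console.WriteLine(byColumns.Equals(byRows));  // True
        Console.WriteLine(diag);
    }
}
```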
- - - - - Create a new dense matrix and initialize each value to the same provided value. - - - - - Create a new dense matrix and initialize each value using the provided init function. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new dense matrix with values sampled from the provided random distribution. - - - - - Gets the matrix's data. - - The matrix's data. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of add - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. 
- The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Evaluates whether this matrix is symmetric. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A vector using dense storage. - - - - - Number of elements - - - - - Gets the vector's data. - - - - - Create a new dense vector straight from an initialized vector storage instance. - The storage is used directly without copying. 
- Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new dense vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new dense vector directly binding to a raw array. - The array is used directly without copying. - Very efficient, but changes to the array and the vector will affect each other. - - - - - Create a new dense vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given array. - This new vector will be independent from the array. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector and initialize each value using the provided value. - - - - - Create a new dense vector and initialize each value using the provided init function. - - - - - Create a new dense vector with values sampled from the provided random distribution. - - - - - Gets the vector's data. - - The vector's data. - - - - Returns a reference to the internal data structure. - - The DenseVector whose internal data we are - returning. - - A reference to the internal date of the given vector. - - - - - Returns a vector bound directly to a reference of the provided array. - - The array to bind to the DenseVector object. - - A DenseVector whose values are bound to the given array. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - The scalar to add. - The vector to store the result of the addition. - - - - Adds another vector to this vector and stores the result into the result vector. - - The vector to add to this one. - The vector to store the result of the addition. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The vector to store the result of the subtraction. - - - - Subtracts another vector from this vector and stores the result into the result vector. - - The vector to subtract from this one. - The vector to store the result of the subtraction. - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Negates vector and saves result to - - Target vector - - - - Conjugates vector and save result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. 
- - The scalar to multiply. - The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Multiplies a vector with a complex. - - The vector to scale. - The Complex value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a complex. - - The Complex value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . - - - - Divides a vector with a complex. - - The vector to divide. - The Complex value. - The result of the division. - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Creates a Complex dense vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. - - - A Complex dense vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a complex dense vector to double-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a complex dense vector to double-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. 
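Since the norm members above only state their formulas, here is a small illustrative example of the vector norms (L1, L2, infinity, and the general p-norm) on a double-precision vector; the method names are the usual Math.NET ones and are assumed to match the version documented here:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class NormDemo
{
    static void Main()
    {
        var v = Vector<double>.Build.DenseOfArray(new[] { 3.0, -4.0, 12.0 });

        Console.WriteLine(v.L1Norm());        // 19  (sum of absolute values, Manhattan norm)
        Console.WriteLine(v.L2Norm());        // 13  (Euclidean norm, sqrt(9 + 16 + 144))
        Console.WriteLine(v.InfinityNorm());  // 12  (maximum absolute value)

        // General p-norm: ( sum |v[i]|^p )^(1/p); p = 2 reproduces the L2 norm.
        Console.WriteLine(v.Norm(2.0));       // 13
        Console.WriteLine(v.Norm(3.0));       // (27 + 64 + 1728)^(1/3), about 12.21
    }
}
```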
- - - - - A matrix type for diagonal matrices. - - - Diagonal matrices can be non-square matrices but the diagonal always starts - at element 0,0. A diagonal matrix will throw an exception if non diagonal - entries are set. The exception to this is when the off diagonal elements are - 0.0 or NaN; these settings will cause no change to the diagonal matrix. - - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new diagonal matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to contain the diagonal elements only and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new diagonal matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - The matrix to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - The array to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new diagonal matrix with diagonal values sampled from the provided random distribution. - - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. 
- - The matrix to subtract. - The matrix to store the result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - If the result matrix's dimensions are not the same as this matrix. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the division. - - - - Computes the determinant of this matrix. - - The determinant of this matrix. - - - - Returns the elements of the diagonal in a . - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Copies the values of the given array to the diagonal. - - The array to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - - Copies the values of the given to the diagonal. - - The vector to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the Frobenius norm of this matrix. - The Frobenius norm of this matrix. - - - Calculates the condition number of this matrix. 
- The condition number of the matrix. - - - Computes the inverse of this matrix. - If is not a square matrix. - If is singular. - The inverse of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Permute the columns of a matrix according to a permutation. - - The column permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Permute the rows of a matrix according to a permutation. - - The row permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Evaluates whether this matrix is symmetric. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A class which encapsulates the functionality of a Cholesky factorization. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Gets the determinant of the matrix for which the Cholesky matrix was computed. - - - - - Gets the log determinant of the matrix for which the Cholesky matrix was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for dense matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. 
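The Cholesky description above (A = L*L' for a symmetric, positive definite A, computed once at construction and cached) translates into code roughly as follows. This is a sketch with made-up sample data, not an excerpt from the removed file:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class CholeskyDemo
{
    static void Main()
    {
        // A small symmetric, positive definite matrix (invented for illustration).
        var A = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4, 2, 0 },
            { 2, 5, 1 },
            { 0, 1, 3 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });

        // Factor A = L*L'. Per the remarks above, the constructor throws if A is
        // not symmetric positive definite.
        var chol = A.Cholesky();

        Matrix<double> L = chol.Factor;   // lower triangular factor
        double det = chol.Determinant;    // determinant of A via the factorization

        // Solve A x = b reusing the cached factorization.
        Vector<double> x = chol.Solve(b);

        Console.WriteLine(L);
        Console.WriteLine(det);
        Console.WriteLine((A * x - b).L2Norm());  // residual, essentially 0
    }
}
```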
- - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. 
This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The type of QR factorization to perform. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. 
A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . 
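To make the EVD and SVD descriptions above concrete, here is a hedged sketch using the `Evd()` and `Svd()` members; the property names (`EigenValues`, `S`, `Rank`, `ConditionNumber`) follow the usual Math.NET Numerics API and are assumed to apply to the version documented in this file:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class EvdSvdDemo
{
    static void Main()
    {
        var A = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 2, 1 },
            { 1, 3 }
        });

        // Eigenvalue decomposition: for this symmetric A, A = V*D*V'.
        var evd = A.Evd();
        Console.WriteLine(evd.EigenValues);     // complex vector; purely real here
        Console.WriteLine(evd.EigenVectors);

        // Singular value decomposition: A = U*S*VT, singular values in descending order.
        var svd = A.Svd(true);                  // true: also compute the U and VT vectors
        Console.WriteLine(svd.S);               // vector of singular values
        Console.WriteLine(svd.Rank);            // number of non-negligible singular values
        Console.WriteLine(svd.ConditionNumber); // max(S) / min(S)

        // Linear / least-squares solve via the SVD.
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 0.0 });
        Console.WriteLine(svd.Solve(b));
    }
}
```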
- - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. - - Source matrix to reduce - Output: Arrays for internal storage of real parts of eigenvalues - Output: Arrays for internal storage of imaginary parts of eigenvalues - Output: Arrays that contains further information about the transformations. - Order of initial matrix - This is derived from the Algol procedures HTRIDI by - Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. 
- Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Determines eigenvectors by undoing the symmetric tridiagonalize transformation - - The eigen vectors to work on. - Previously tridiagonalized matrix by . - Contains further information about the transformations - Input matrix order - This is derived from the Algol procedures HTRIBK, by - by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - The eigen values to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. 
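The LU and QR members described above are typically used by factoring once and then reusing the factorization for solves and, in the LU case, for the inverse. A minimal sketch with invented sample data:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class LuQrDemo
{
    static void Main()
    {
        var A = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 3, 1, 2 },
            { 1, 4, 1 },
            { 2, 1, 5 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });

        // LU with pivoting (P*A = L*U): reuse the factorization for solves and the inverse.
        var lu = A.LU();
        Vector<double> x1 = lu.Solve(b);
        Matrix<double> Ainv = lu.Inverse();

        // QR by Householder transformations (A = Q*R), usable for square or tall systems.
        var qr = A.QR();
        Vector<double> x2 = qr.Solve(b);

        Console.WriteLine((x1 - x2).L2Norm());  // essentially 0: both solve the same system
        Console.WriteLine(A * Ainv);            // essentially the identity matrix
    }
}
```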
- - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Complex value z1 - Complex value z2 - Result multiplication of signum function and absolute value - - - - Interchanges two vectors and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. 
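The DROTG-style helper documented just above computes a Givens rotation. The following standalone sketch shows only the underlying arithmetic; it is not the library's internal routine, which additionally handles scaling and sign conventions:

```csharp
using System;

class GivensDemo
{
    // Find c, s with c^2 + s^2 = 1 such that
    //   [  c  s ] [ a ]   [ r ]
    //   [ -s  c ] [ b ] = [ 0 ]
    // This is the plain textbook form of the rotation described above.
    static (double c, double s, double r) Rotg(double a, double b)
    {
        if (b == 0.0) return (1.0, 0.0, a);
        double r = Math.Sqrt(a * a + b * b);
        return (a / r, b / r, r);
    }

    static void Main()
    {
        var (c, s, r) = Rotg(3.0, 4.0);
        Console.WriteLine($"c={c}, s={s}, r={r}");   // c=0.6, s=0.8, r=5
        Console.WriteLine(-s * 3.0 + c * 4.0);       // rotated y-coordinate: 0
    }
}
```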
[XML documentation file (MathNet.Numerics linear algebra, Complex types): matrix and vector classes with dense, sparse (CSR) and diagonal storage; norm, dot-product, pointwise, conjugate-transpose and SVD/LU helpers; iterative solvers BiCgStab, GPBiCG, ML(k)-BiCGStab, TFQMR and a composite solver; preconditioners: diagonal, ILU(0), ILUTP and MILU(0).]
- - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the division. - - - - Computes the determinant of this matrix. - - The determinant of this matrix. - - - - Returns the elements of the diagonal in a . - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Copies the values of the given array to the diagonal. - - The array to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - - Copies the values of the given to the diagonal. - - The vector to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - Calculates the condition number of this matrix. - The condition number of the matrix. - - - Computes the inverse of this matrix. - If is not a square matrix. - If is singular. - The inverse of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. 
- - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Permute the columns of a matrix according to a permutation. - - The column permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Permute the rows of a matrix according to a permutation. - - The row permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Evaluates whether this matrix is symmetric. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A class which encapsulates the functionality of a Cholesky factorization. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Gets the determinant of the matrix for which the Cholesky matrix was computed. - - - - - Gets the log determinant of the matrix for which the Cholesky matrix was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for dense matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. 
- If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. 
Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. 
- - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. 
- - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. - - Source matrix to reduce - Output: Arrays for internal storage of real parts of eigenvalues - Output: Arrays for internal storage of imaginary parts of eigenvalues - Output: Arrays that contains further information about the transformations. - Order of initial matrix - This is derived from the Algol procedures HTRIDI by - Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Determines eigenvectors by undoing the symmetric tridiagonalize transformation - - The eigen vectors to work on. - Previously tridiagonalized matrix by . - Contains further information about the transformations - Input matrix order - This is derived from the Algol procedures HTRIBK, by - by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. 
Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - The eigen values to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. 
- - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Complex32 value z1 - Complex32 value z2 - Result multiplication of signum function and absolute value - - - - Interchanges two vectors and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Calculate Norm 2 of the column in matrix starting from row - - Source matrix - The number of rows in - Column index - Start row index - Norm2 (Euclidean norm) of the column - - - - Calculate Norm 2 of the vector starting from index - - Source vector - Start index - Norm2 (Euclidean norm) of the vector - - - - Calculate dot product of and conjugating the first vector. - - Source matrix - The number of rows in - Index of column A - Index of column B - Starting row index - Dot product value - - - - Performs rotation of points in the plane. Given two vectors x and y , - each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) - - Source matrix - The number of rows in - Index of column A - Index of column B - scalar cos value - scalar sin value - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Complex32 version of the class. 
- - - - - Initializes a new instance of the Matrix class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Returns the conjugate transpose of this matrix. - - The conjugate transpose of this matrix. - - - - Puts the conjugate transpose of this matrix into the result matrix. - - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to divide by each element of the matrix. - The matrix to store the result of the division. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. 
- - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Calculates the p-norms of all row vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the p-norms of all column vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all row vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all column vectors to a unit p-norm. 
- Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the value sum of each row vector. - - - - - Calculates the absolute value sum of each row vector. - - - - - Calculates the value sum of each column vector. - - - - - Calculates the absolute value sum of each column vector. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' - of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the - BiCGStab can be used on non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The Bi-CGSTAB algorithm was taken from:
- Templates for the solution of linear systems: Building blocks - for iterative methods -
- Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, - June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, - Charles Romine and Henk van der Vorst -
- Url: http://www.netlib.org/templates/Templates.html -
- Algorithm is described in Chapter 2, section 2.3.8, page 27 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient , A. - The solution , b. - The result , x. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A composite matrix solver. The actual solver is made by a sequence of - matrix solvers. - - - - Solver based on:
- Faster PDE-based simulations using robust composite linear solvers
- S. Bhowmicka, P. Raghavan a,*, L. McInnes b, B. Norris
- Future Generation Computer Systems, Vol 20, 2004, pp 373�387
-
- - Note that if an iterator is passed to this solver it will be used for all the sub-solvers. - -
-
- - - The collection of solvers that will be used - - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A diagonal preconditioner. The preconditioner uses the inverse - of the matrix diagonal as preconditioning values. - - - - - The inverse of the matrix diagonal. - - - - - Returns the decomposed matrix diagonal. - - The matrix diagonal. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - A Generalized Product Bi-Conjugate Gradient iterative matrix solver. - - - - The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an - alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. - Unlike the CG solver the GPBiCG solver can be used on - non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The GPBiCG algorithm was taken from:
- GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with - efficiency and robustness -
- S. Fujino -
- Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 -
-
- - The example code below provides an indication of the possible use of the - solver. - -
-
- - - Indicates the number of BiCGStab steps should be taken - before switching. - - - - - Indicates the number of GPBiCG steps should be taken - before switching. - - - - - Gets or sets the number of steps taken with the BiCgStab algorithm - before switching over to the GPBiCG algorithm. - - - - - Gets or sets the number of steps taken with the GPBiCG algorithm - before switching over to the BiCgStab algorithm. - - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Decide if to do steps with BiCgStab - - Number of iteration - true if yes, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - An incomplete, level 0, LU factorization preconditioner. - - - The ILU(0) algorithm was taken from:
- Iterative methods for sparse linear systems
- Yousef Saad
- Algorithm is described in Chapter 10, section 10.3.2, page 275
-
-
- - - The matrix holding the lower (L) and upper (U) matrices. The - decomposition matrices are combined to reduce storage. - - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - A new matrix containing the lower triagonal elements. - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - This class performs an Incomplete LU factorization with drop tolerance - and partial pivoting. The drop tolerance indicates which additional entries - will be dropped from the factorized LU matrices. - - - The ILUTP-Mem algorithm was taken from:
- ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner -
- Tzu-Yi Chen, Department of Mathematics and Computer Science,
- Pomona College, Claremont CA 91711, USA
- Published in:
- Lecture Notes in Computer Science
- Volume 3046 / 2004
- pp. 20 - 28
- Algorithm is described in Section 2, page 22 -
-
- - - The default fill level. - - - - - The default drop tolerance. - - - - - The decomposed upper triangular matrix. - - - - - The decomposed lower triangular matrix. - - - - - The array containing the pivot values. - - - - - The fill level. - - - - - The drop tolerance. - - - - - The pivot tolerance. - - - - - Initializes a new instance of the class with the default settings. - - - - - Initializes a new instance of the class with the specified settings. - - - The amount of fill that is allowed in the matrix. The value is a fraction of - the number of non-zero entries in the original matrix. Values should be positive. - - - The absolute drop tolerance which indicates below what absolute value an entry - will be dropped from the matrix. A drop tolerance of 0.0 means that no values - will be dropped. Values should always be positive. - - - The pivot tolerance which indicates at what level pivoting will take place. A - value of 0.0 means that no pivoting will take place. - - - - - Gets or sets the amount of fill that is allowed in the matrix. The - value is a fraction of the number of non-zero entries in the original - matrix. The standard value is 200. - - - - Values should always be positive and can be higher than 1.0. A value lower - than 1.0 means that the eventual preconditioner matrix will have fewer - non-zero entries as the original matrix. A value higher than 1.0 means that - the eventual preconditioner can have more non-zero values than the original - matrix. - - - Note that any changes to the FillLevel after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the absolute drop tolerance which indicates below what absolute value - an entry will be dropped from the matrix. The standard value is 0.0001. - - - - The values should always be positive and can be larger than 1.0. A low value will - keep more small numbers in the preconditioner matrix. A high value will remove - more small numbers from the preconditioner matrix. - - - Note that any changes to the DropTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the pivot tolerance which indicates at what level pivoting will - take place. The standard value is 0.0 which means pivoting will never take place. - - - - The pivot tolerance is used to calculate if pivoting is necessary. Pivoting - will take place if any of the values in a row is bigger than the - diagonal value of that row divided by the pivot tolerance, i.e. pivoting - will take place if row(i,j) > row(i,i) / PivotTolerance for - any j that is not equal to i. - - - Note that any changes to the PivotTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the lower triagonal elements. 
- - - - Returns the pivot array. This array is not needed for normal use because - the preconditioner will return the solution vector values in the proper order. - - - This method is used for debugging purposes only and should normally not be used. - - The pivot array. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. Note that the - method takes a general matrix type. However internally the data is stored - as a sparse matrix. Therefore it is not recommended to pass a dense matrix. - - If is . - If is not a square matrix. - - - - Pivot elements in the according to internal pivot array - - Row to pivot in - - - - Was pivoting already performed - - Pivots already done - Current item to pivot - true if performed, otherwise false - - - - Swap columns in the - - Source . - First column index to swap - Second column index to swap - - - - Sort vector descending, not changing vector but placing sorted indices to - - Start sort form - Sort till upper bound - Array with sorted vector indices - Source - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - Pivot elements in according to internal pivot array - - Source . - Result after pivoting. - - - - An element sort algorithm for the class. - - - This sort algorithm is used to sort the columns in a sparse matrix based on - the value of the element on the diagonal of the matrix. - - - - - Sorts the elements of the vector in decreasing - fashion. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Sorts the elements of the vector in decreasing - fashion using heap sort algorithm. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Build heap for double indices - - Root position - Length of - Indices of - Target - - - - Sift double indices - - Indices of - Target - Root position - Length of - - - - Sorts the given integers in a decreasing fashion. - - The values. - - - - Sort the given integers in a decreasing fashion using heapsort algorithm - - Array of values to sort - Length of - - - - Build heap - - Target values array - Root position - Length of - - - - Sift values - - Target value array - Root position - Length of - - - - Exchange values in array - - Target values array - First value to exchange - Second value to exchange - - - - A simple milu(0) preconditioner. - - - Original Fortran code by Yousef Saad (07 January 2004) - - - - Use modified or standard ILU(0) - - - - Gets or sets a value indicating whether to use modified or standard ILU(0). - - - - - Gets a value indicating whether the preconditioner is initialized. - - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square or is not an - instance of SparseCompressedRowMatrixStorage. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector b. - The left hand side vector x. - - - - MILU0 is a simple milu(0) preconditioner. - - Order of the matrix. - Matrix values in CSR format (input). - Column indices (input). 
- Row pointers (input). - Matrix values in MSR format (output). - Row pointers and column indices (output). - Pointer to diagonal elements (output). - True if the modified/MILU algorithm should be used (recommended) - Returns 0 on success or k > 0 if a zero pivot was encountered at step k. - - - - A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' - of the standard BiCgStab solver. - - - The algorithm was taken from:
- ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors -
- Man-Chung Yeung and Tony F. Chan -
- SIAM Journal of Scientific Computing -
- Volume 21, Number 4, pp. 1263 - 1290 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
- - - The default number of starting vectors. - - - - - The collection of starting vectors which are used as the basis for the Krylov sub-space. - - - - - The number of starting vectors used by the algorithm - - - - - Gets or sets the number of starting vectors. - - - Must be larger than 1 and smaller than the number of variables in the matrix that - for which this solver will be used. - - - - - Resets the number of starting vectors to the default value. - - - - - Gets or sets a series of orthonormal vectors which will be used as basis for the - Krylov sub-space. - - - - - Gets the number of starting vectors to create - - Maximum number - Number of variables - Number of starting vectors to create - - - - Returns an array of starting vectors. - - The maximum number of starting vectors that should be created. - The number of variables. - - An array with starting vectors. The array will never be larger than the - but it may be smaller if - the is smaller than - the . - - - - - Create random vectors array - - Number of vectors - Size of each vector - Array of random vectors - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Source A. - Residual data. - x data. - b data. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. - - - - The TFQMR algorithm was taken from:
- Iterative Methods for Sparse Linear Systems, Yousef Saad. The algorithm is described in Chapter 7, Section 7.4.3, page 219.
- The example code below provides an indication of the possible use of the solver.
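Here too the referenced example seems to have been dropped with the markup. Under the same assumptions as the ML(k)-BiCGStab sketch above (the `TFQMR` and `MILU0Preconditioner` names should likewise be verified against the shipped library), a TFQMR solve would look like this:

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

// Sketch only: TFQMR is driven through the same Solve(A, b, x, iterator, preconditioner) call.
var A = SparseMatrix.OfArray(new double[,] { { 3, -1 }, { -1, 3 } });
var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });
var x = Vector<double>.Build.Dense(b.Count);

var iterator = new Iterator<double>(
    new IterationCountStopCriterion<double>(500),
    new ResidualStopCriterion<double>(1e-10));

new TFQMR().Solve(A, b, x, iterator, new MILU0Preconditioner());
```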
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Is even? - - Number to check - true if even, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. - The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. - Wikipedia - CSR. - - - - - Gets the number of non zero elements in the matrix. - - The number of non zero elements. - - - - Create a new sparse matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new sparse matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable. - The enumerable is assumed to be in row-major order (row by row). - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - - Create a new sparse matrix with the given number of rows and columns as a copy of the given array. - The array is assumed to be in column-major order (column by column). - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix and initialize each value to the same provided value. - - - - - Create a new sparse matrix and initialize each value using the provided init function. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . 
- If the result matrix's dimensions are not the same as this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. 
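As a quick illustration of the CSR-backed sparse matrix described above, the following hedged sketch creates one and exercises a few of the listed members (`NonZerosCount`, `LowerTriangle`, `FrobeniusNorm`). The constructor and member names are assumed from MathNet.Numerics and should be verified against the shipped assembly.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

// Sketch only: CSR-backed sparse matrix with a few of the operations listed above.
var m = new SparseMatrix(5, 5);          // all cells zero, nothing stored yet
m[0, 0] = 2.0;
m[1, 1] = 2.0;
m[4, 0] = -1.0;

Console.WriteLine(m.NonZerosCount);      // 3 stored non-zero elements
var lower = m.LowerTriangle();           // lower triangle as a new matrix
var norm  = m.FrobeniusNorm();           // entry-wise Frobenius norm
var prod  = m * m.Transpose();           // matrix product
```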
- - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Evaluates whether this matrix is symmetric. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - A vector with sparse storage, intended for very large vectors where most of the cells are zero. - - The sparse vector is not thread safe. - - - - Gets the number of non zero elements in the vector. - - The number of non zero elements. - - - - Create a new sparse vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new sparse vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new sparse vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. 
- - - - - Create a new sparse vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector and initialize each value using the provided value. - - - - - Create a new sparse vector and initialize each value using the provided init function. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled - sparse vector and very inefficient. Would be better to work with a dense vector instead. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. - - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Negates vector and saves result to - - Target vector - - - - Conjugates vector and save result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Multiplies a vector with a complex. - - The vector to scale. - The complex value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a complex. - - The complex value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . - - - - Divides a vector with a complex. - - The vector to divide. - The complex value. - The result of the division. - If is . - - - - Computes the modulus of each element of the vector of the given divisor. - - The vector whose elements we want to compute the modulus of. - The divisor to use, - The result of the calculation - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. 
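A corresponding hedged sketch for the sparse vector type documented above, again assuming the MathNet.Numerics API (`Vector<double>.Build.Sparse`, `DotProduct`); note the warning above that adding a non-zero scalar densifies the vector.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

// Sketch only: sparse vector where only a handful of entries are non-zero.
var v = Vector<double>.Build.Sparse(10000);   // zero vector; zero cells are not stored
v[0] = 1.5;
v[42] = -2.0;
v[9999] = 3.0;

var w = 2.0 * v;                  // scaling keeps the result sparse
var dot = v.DotProduct(w);        // sum of v[i] * w[i]
Console.WriteLine(dot);           // 2 * (1.5^2 + 2^2 + 3^2) = 30.5

// Note: adding a non-zero scalar (v + 1.0) would fill every cell, as warned above.
```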
- - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Creates a double sparse vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a Complex32. - - - A double sparse vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Complex32 version of the class. - - - - - Initializes a new instance of the Vector class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Conjugates vector and save result to - - Target vector - - - - Negates vector and saves result to - - Target vector - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. - - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Divides each element of the vector by a scalar and stores the result in the result vector. - - - The scalar to divide with. - - - The vector to store the result of the division. - - - - - Divides a scalar by each element of the vector and stores the result in the result vector. - - The scalar to divide. - The vector to store the result of the division. 
- - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise raise this vector to an exponent and store the result into the result vector. - - The exponent to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Returns the value of the absolute minimum element. - - The value of the absolute minimum element. - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the value of the absolute maximum element. - - The value of the absolute maximum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. 
- - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - - The p value. - - - Scalar ret = ( ∑|At(i)|^p )^(1/p) - - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Normalizes this vector to a unit vector with respect to the p-norm. - - - The p value. - - - This vector normalized to a unit vector with respect to the p-norm. - - - - - Generic linear algebra type builder, for situations where a matrix or vector - must be created in a generic way. Usage of generic builders should not be - required in normal user code. - - - - - Gets the value of 0.0 for type T. - - - - - Gets the value of 1.0 for type T. - - - - - Create a new matrix straight from an initialized matrix storage instance. - If you have an instance of a discrete storage type instead, use their direct methods instead. - - - - - Create a new matrix with the same kind of the provided example. - - - - - Create a new matrix with the same kind and dimensions of the provided example. - - - - - Create a new matrix with the same kind of the provided example. - - - - - Create a new matrix with a type that can represent and is closest to both provided samples. - - - - - Create a new matrix with a type that can represent and is closest to both provided samples and the dimensions of example. - - - - - Create a new dense matrix with values sampled from the provided random distribution. - - - - - Create a new dense matrix with values sampled from the standard distribution with a system random source. - - - - - Create a new dense matrix with values sampled from the standard distribution with a system random source. - - - - - Create a new positive definite dense matrix where each value is the product - of two samples from the provided random distribution. - - - - - Create a new positive definite dense matrix where each value is the product - of two samples from the standard distribution. - - - - - Create a new positive definite dense matrix where each value is the product - of two samples from the provided random distribution. - - - - - Create a new dense matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - - - - Create a new dense matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to be in column-major order (column by column) and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - - Create a new dense matrix and initialize each value to the same provided value. - - - - - Create a new dense matrix and initialize each value using the provided init function. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. - - - - - Create a new diagonal dense identity matrix with a one-diagonal. 
- - - - - Create a new diagonal dense identity matrix with a one-diagonal. - - - - - Create a new dense matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable. - The enumerable is assumed to be in column-major order (column by column). - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix of T as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix of T as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable. - The enumerable is assumed to be in row-major order (row by row). - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix of T as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix of T as a copy of the given row arrays. - This new matrix will be independent from the arrays. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix from a 2D array of existing matrices. - The matrices in the array are not required to be dense already. - If the matrices do not align properly, they are placed on the top left - corner of their cell with the remaining fields left zero. - - - - - Create a new sparse matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a sparse matrix of T with the given number of rows and columns. - - The number of rows. - The number of columns. - - - - Create a new sparse matrix and initialize each value to the same provided value. - - - - - Create a new sparse matrix and initialize each value using the provided init function. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. - - - - - Create a new diagonal dense identity matrix with a one-diagonal. - - - - - Create a new diagonal dense identity matrix with a one-diagonal. - - - - - Create a new sparse matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable. - The enumerable is assumed to be in row-major order (row by row). - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the vector. 
- - - - - - Create a new sparse matrix with the given number of rows and columns as a copy of the given array. - The array is assumed to be in column-major order (column by column). - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix from a 2D array of existing matrices. - The matrices in the array are not required to be sparse already. - If the matrices do not align properly, they are placed on the top left - corner of their cell with the remaining fields left zero. - - - - - Create a new diagonal matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - - - - Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to represent the diagonal values and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new square diagonal matrix directly binding to a raw array. - The array is assumed to represent the diagonal values and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new diagonal matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal matrix and initialize each diagonal value using the provided init function. - - - - - Create a new diagonal identity matrix with a one-diagonal. - - - - - Create a new diagonal identity matrix with a one-diagonal. - - - - - Create a new diagonal matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Generic linear algebra type builder, for situations where a matrix or vector - must be created in a generic way. Usage of generic builders should not be - required in normal user code. - - - - - Gets the value of 0.0 for type T. - - - - - Gets the value of 1.0 for type T. - - - - - Create a new vector straight from an initialized matrix storage instance. - If you have an instance of a discrete storage type instead, use their direct methods instead. - - - - - Create a new vector with the same kind of the provided example. - - - - - Create a new vector with the same kind and dimension of the provided example. - - - - - Create a new vector with the same kind of the provided example. - - - - - Create a new vector with a type that can represent and is closest to both provided samples. 
- - - - - Create a new vector with a type that can represent and is closest to both provided samples and the dimensions of example. - - - - - Create a new vector with a type that can represent and is closest to both provided samples. - - - - - Create a new dense vector with values sampled from the provided random distribution. - - - - - Create a new dense vector with values sampled from the standard distribution with a system random source. - - - - - Create a new dense vector with values sampled from the standard distribution with a system random source. - - - - - Create a new dense vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a dense vector of T with the given size. - - The size of the vector. - - - - Create a dense vector of T that is directly bound to the specified array. - - - - - Create a new dense vector and initialize each value using the provided value. - - - - - Create a new dense vector and initialize each value using the provided init function. - - - - - Create a new dense vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given array. - This new vector will be independent from the array. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a sparse vector of T with the given size. - - The size of the vector. - - - - Create a new sparse vector and initialize each value using the provided value. - - - - - Create a new sparse vector and initialize each value using the provided init function. - - - - - Create a new sparse vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given array. - This new vector will be independent from the array. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new matrix straight from an initialized matrix storage instance. 
- If you have an instance of a discrete storage type instead, use their direct methods instead. - - - - - Create a new matrix with the same kind of the provided example. - - - - - Create a new matrix with the same kind and dimensions of the provided example. - - - - - Create a new matrix with the same kind of the provided example. - - - - - Create a new matrix with a type that can represent and is closest to both provided samples. - - - - - Create a new matrix with a type that can represent and is closest to both provided samples and the dimensions of example. - - - - - Create a new dense matrix with values sampled from the provided random distribution. - - - - - Create a new dense matrix with values sampled from the standard distribution with a system random source. - - - - - Create a new dense matrix with values sampled from the standard distribution with a system random source. - - - - - Create a new positive definite dense matrix where each value is the product - of two samples from the provided random distribution. - - - - - Create a new positive definite dense matrix where each value is the product - of two samples from the standard distribution. - - - - - Create a new positive definite dense matrix where each value is the product - of two samples from the provided random distribution. - - - - - Create a new dense matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - - - - Create a new dense matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to be in column-major order (column by column) and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - - Create a new dense matrix and initialize each value to the same provided value. - - - - - Create a new dense matrix and initialize each value using the provided init function. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. - - - - - Create a new diagonal dense identity matrix with a one-diagonal. - - - - - Create a new diagonal dense identity matrix with a one-diagonal. - - - - - Create a new dense matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable. - The enumerable is assumed to be in column-major order (column by column). 
- This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix of T as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix of T as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix of T as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix of T as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. 
[diff continues: deletion of the generated Math.NET Numerics XML API documentation — builder methods for dense, sparse and diagonal matrices and vectors; the Cholesky, Evd, QR (Householder and modified Gram-Schmidt), LU and SVD factorization classes with their Solve overloads; and the Matrix base-class arithmetic, pointwise, norm, sub-matrix and enumeration members]
- - - - - Applies a function to each value of this matrix and replaces the value with its result. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and replaces the value with its result. - The row and column indices of each value (zero-based) are passed as first arguments to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and replaces the value in the result matrix. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and replaces the value in the result matrix. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and replaces the value in the result matrix. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and replaces the value in the result matrix. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and returns the results as a new matrix. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and returns the results as a new matrix. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - For each row, applies a function f to each element of the row, threading an accumulator argument through the computation. - Returns an array with the resulting accumulator states for each row. - - - - - For each column, applies a function f to each element of the column, threading an accumulator argument through the computation. - Returns an array with the resulting accumulator states for each column. - - - - - Applies a function f to each row vector, threading an accumulator vector argument through the computation. - Returns the resulting accumulator vector. - - - - - Applies a function f to each column vector, threading an accumulator vector argument through the computation. - Returns the resulting accumulator vector. - - - - - Reduces all row vectors by applying a function between two of them, until only a single vector is left. - - - - - Reduces all column vectors by applying a function between two of them, until only a single vector is left. 
- - - - - Applies a function to each value pair of two matrices and replaces the value in the result vector. - - - - - Applies a function to each value pair of two matrices and returns the results as a new vector. - - - - - Applies a function to update the status with each value pair of two matrices and returns the resulting status. - - - - - Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns a tuple with the index and values of the first element pair of two matrices of the same size satisfying a predicate, or null if none is found. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if at least one element satisfies a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if at least one element pairs of two matrices of the same size satisfies a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if all elements satisfy a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if all element pairs of two matrices of the same size satisfy a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Adds a scalar to each element of the matrix. - - This operator will allocate new memory for the result. It will - choose the representation of the provided matrix. - The left matrix to add. - The scalar value to add. - The result of the addition. - If is . - - - - Adds a scalar to each element of the matrix. - - This operator will allocate new memory for the result. It will - choose the representation of the provided matrix. - The scalar value to add. - The right matrix to add. - The result of the addition. - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the subtraction. - If and don't have the same dimensions. - If or is . - - - - Subtracts a scalar from each element of a matrix. - - This operator will allocate new memory for the result. It will - choose the representation of the provided matrix. - The left matrix to subtract. - The scalar value to subtract. - The result of the subtraction. - If and don't have the same dimensions. - If or is . - - - - Subtracts each element of a matrix from a scalar. - - This operator will allocate new memory for the result. It will - choose the representation of the provided matrix. - The scalar value to subtract. - The right matrix to subtract. - The result of the subtraction. 
- If and don't have the same dimensions. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Divides a scalar with a matrix. - - The scalar to divide. - The matrix. - The result of the division. - If is . - - - - Divides a matrix with a scalar. - - The matrix to divide. - The scalar value. - The result of the division. - If is . - - - - Computes the pointwise remainder (% operator), where the result has the sign of the dividend, - of each element of the matrix of the given divisor. - - The matrix whose elements we want to compute the modulus of. - The divisor to use. - The result of the calculation - If is . - - - - Computes the pointwise remainder (% operator), where the result has the sign of the dividend, - of the given dividend of each element of the matrix. - - The dividend we want to compute the modulus of. - The matrix whose elements we want to use as divisor. - The result of the calculation - If is . - - - - Computes the pointwise remainder (% operator), where the result has the sign of the dividend, - of each element of two matrices. - - The matrix whose elements we want to compute the remainder of. - The divisor to use. - If and are not the same size. - If is . - - - - Computes the sqrt of a matrix pointwise - - The input matrix - - - - - Computes the exponential of a matrix pointwise - - The input matrix - - - - - Computes the log of a matrix pointwise - - The input matrix - - - - - Computes the log10 of a matrix pointwise - - The input matrix - - - - - Computes the sin of a matrix pointwise - - The input matrix - - - - - Computes the cos of a matrix pointwise - - The input matrix - - - - - Computes the tan of a matrix pointwise - - The input matrix - - - - - Computes the asin of a matrix pointwise - - The input matrix - - - - - Computes the acos of a matrix pointwise - - The input matrix - - - - - Computes the atan of a matrix pointwise - - The input matrix - - - - - Computes the sinh of a matrix pointwise - - The input matrix - - - - - Computes the cosh of a matrix pointwise - - The input matrix - - - - - Computes the tanh of a matrix pointwise - - The input matrix - - - - - Computes the absolute value of a matrix pointwise - - The input matrix - - - - - Computes the floor of a matrix pointwise - - The input matrix - - - - - Computes the ceiling of a matrix pointwise - - The input matrix - - - - - Computes the rounded value of a matrix pointwise - - The input matrix - - - - - Computes the Cholesky decomposition for a matrix. - - The Cholesky decomposition object. - - - - Computes the LU decomposition for a matrix. - - The LU decomposition object. 
- - - - Computes the QR decomposition for a matrix. - - The type of QR factorization to perform. - The QR decomposition object. - - - - Computes the QR decomposition for a matrix using Modified Gram-Schmidt Orthogonalization. - - The QR decomposition object. - - - - Computes the SVD decomposition for a matrix. - - Compute the singular U and VT vectors or not. - The SVD decomposition object. - - - - Computes the EVD decomposition for a matrix. - - The EVD decomposition object. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. - - The solution vector b. - The result vector x. - The iterative solver to use. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. - - The solution matrix B. - The result matrix X - The iterative solver to use. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. - - The solution vector b. - The result vector x. - The iterative solver to use. - Criteria to control when to stop iterating. - The preconditioner to use for approximations. - - - - Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. - - The solution matrix B. - The result matrix X - The iterative solver to use. - Criteria to control when to stop iterating. - The preconditioner to use for approximations. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. - - The solution vector b. - The result vector x. - The iterative solver to use. - Criteria to control when to stop iterating. - - - - Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. - - The solution matrix B. - The result matrix X - The iterative solver to use. - Criteria to control when to stop iterating. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. - - The solution vector b. - The iterative solver to use. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - The result vector x. - - - - Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. - - The solution matrix B. - The iterative solver to use. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - The result matrix X. 
- - - - Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. - - The solution vector b. - The iterative solver to use. - Criteria to control when to stop iterating. - The preconditioner to use for approximations. - The result vector x. - - - - Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. - - The solution matrix B. - The iterative solver to use. - Criteria to control when to stop iterating. - The preconditioner to use for approximations. - The result matrix X. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. - - The solution vector b. - The iterative solver to use. - Criteria to control when to stop iterating. - The result vector x. - - - - Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. - - The solution matrix B. - The iterative solver to use. - Criteria to control when to stop iterating. - The result matrix X. - - - - Converts a matrix to single precision. - - - - - Converts a matrix to double precision. - - - - - Converts a matrix to single precision complex numbers. - - - - - Converts a matrix to double precision complex numbers. - - - - - Gets a single precision complex matrix with the real parts from the given matrix. - - - - - Gets a double precision complex matrix with the real parts from the given matrix. - - - - - Gets a real matrix representing the real parts of a complex matrix. - - - - - Gets a real matrix representing the real parts of a complex matrix. - - - - - Gets a real matrix representing the imaginary parts of a complex matrix. - - - - - Gets a real matrix representing the imaginary parts of a complex matrix. - - - - - Existing data may not be all zeros, so clearing may be necessary - if not all of it will be overwritten anyway. - - - - - If existing data is assumed to be all zeros already, - clearing it may be skipped if applicable. - - - - - Allow skipping zero entries (without enforcing skipping them). - When enumerating sparse matrices this can significantly speed up operations. - - - - - Force applying the operation to all fields even if they are zero. - - - - - It is not known yet whether a matrix is symmetric or not. - - - - - A matrix is symmetric - - - - - A matrix is Hermitian (conjugate symmetric). - - - - - A matrix is not symmetric - - - - - Defines an that uses a cancellation token as stop criterion. - - - - - Initializes a new instance of the class. - - - - - Initializes a new instance of the class. - - - - - Determines the status of the iterative calculation based on the stop criteria stored - by the current . Result is set into Status field. - - The number of iterations that have passed so far. - The vector containing the current solution values. - The right hand side vector. - The vector containing the current residual vectors. - - The individual stop criteria may internally track the progress of the calculation based - on the invocation of this method. Therefore this method should only be called if the - calculation has moved forwards at least one step. - - - - - Gets the current calculation status. - - - - - Resets the to the pre-calculation state. - - - - - Clones the current and its settings. - - A new instance of the class. 
- - - - Stop criterion that delegates the status determination to a delegate. - - - - - Create a new instance of this criterion with a custom implementation. - - Custom implementation with the same signature and semantics as the DetermineStatus method. - - - - Determines the status of the iterative calculation by delegating it to the provided delegate. - Result is set into Status field. - - The number of iterations that have passed so far. - The vector containing the current solution values. - The right hand side vector. - The vector containing the current residual vectors. - - The individual stop criteria may internally track the progress of the calculation based - on the invocation of this method. Therefore this method should only be called if the - calculation has moved forwards at least one step. - - - - - Gets the current calculation status. - - - - - Resets the IIterationStopCriterion to the pre-calculation state. - - - - - Clones this criterion and its settings. - - - - - Monitors an iterative calculation for signs of divergence. - - - - - The maximum relative increase the residual may experience without triggering a divergence warning. - - - - - The number of iterations over which a residual increase should be tracked before issuing a divergence warning. - - - - - The status of the calculation - - - - - The array that holds the tracking information. - - - - - The iteration number of the last iteration. - - - - - Initializes a new instance of the class with the specified maximum - relative increase and the specified minimum number of tracking iterations. - - The maximum relative increase that the residual may experience before a divergence warning is issued. - The minimum number of iterations over which the residual must grow before a divergence warning is issued. - - - - Gets or sets the maximum relative increase that the residual may experience before a divergence warning is issued. - - Thrown if the Maximum is set to zero or below. - - - - Gets or sets the minimum number of iterations over which the residual must grow before - issuing a divergence warning. - - Thrown if the value is set to less than one. - - - - Determines the status of the iterative calculation based on the stop criteria stored - by the current . Result is set into Status field. - - The number of iterations that have passed so far. - The vector containing the current solution values. - The right hand side vector. - The vector containing the current residual vectors. - - The individual stop criteria may internally track the progress of the calculation based - on the invocation of this method. Therefore this method should only be called if the - calculation has moved forwards at least one step. - - - - - Detect if solution is diverging - - true if diverging, otherwise false - - - - Gets required history Length - - - - - Gets the current calculation status. - - - - - Resets the to the pre-calculation state. - - - - - Clones the current and its settings. - - A new instance of the class. - - - - Defines an that monitors residuals for NaN's. - - - - - The status of the calculation - - - - - The iteration number of the last iteration. - - - - - Determines the status of the iterative calculation based on the stop criteria stored - by the current . Result is set into Status field. - - The number of iterations that have passed so far. - The vector containing the current solution values. - The right hand side vector. - The vector containing the current residual vectors. 
- - The individual stop criteria may internally track the progress of the calculation based - on the invocation of this method. Therefore this method should only be called if the - calculation has moved forwards at least one step. - - - - - Gets the current calculation status. - - - - - Resets the to the pre-calculation state. - - - - - Clones the current and its settings. - - A new instance of the class. - - - - The base interface for classes that provide stop criteria for iterative calculations. - - - - - Determines the status of the iterative calculation based on the stop criteria stored - by the current IIterationStopCriterion. Status is set to Status field of current object. - - The number of iterations that have passed so far. - The vector containing the current solution values. - The right hand side vector. - The vector containing the current residual vectors. - - The individual stop criteria may internally track the progress of the calculation based - on the invocation of this method. Therefore this method should only be called if the - calculation has moved forwards at least one step. - - - - - Gets the current calculation status. - - is not a legal value. Status should be set in implementation. - - - - Resets the IIterationStopCriterion to the pre-calculation state. - - To implementers: Invoking this method should not clear the user defined - property values, only the state that is used to track the progress of the - calculation. - - - - Defines the interface for classes that solve the matrix equation Ax = b in - an iterative manner. - - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - Defines the interface for objects that can create an iterative solver with - specific settings. This interface is used to pass iterative solver creation - setup information around. - - - - - Gets the type of the solver that will be created by this setup object. - - - - - Gets type of preconditioner, if any, that will be created by this setup object. - - - - - Creates the iterative solver to be used. - - - - - Creates the preconditioner to be used by default (can be overwritten). - - - - - Gets the relative speed of the solver. - - Returns a value between 0 and 1, inclusive. - - - - Gets the relative reliability of the solver. - - Returns a value between 0 and 1 inclusive. - - - - The base interface for preconditioner classes. - - - - Preconditioners are used by iterative solvers to improve the convergence - speed of the solving process. Increase in convergence speed - is related to the number of iterations necessary to get a converged solution. - So while in general the use of a preconditioner means that the iterative - solver will perform fewer iterations it does not guarantee that the actual - solution time decreases given that some preconditioners can be expensive to - setup and run. - - - Note that in general changes to the matrix will invalidate the preconditioner - if the changes occur after creating the preconditioner. - - - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix on which the preconditioner is based. - - - - Approximates the solution to the matrix equation Mx = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. 
- - - - Defines an that monitors the numbers of iteration - steps as stop criterion. - - - - - The default value for the maximum number of iterations the process is allowed - to perform. - - - - - The maximum number of iterations the calculation is allowed to perform. - - - - - The status of the calculation - - - - - Initializes a new instance of the class with the default maximum - number of iterations. - - - - - Initializes a new instance of the class with the specified maximum - number of iterations. - - The maximum number of iterations the calculation is allowed to perform. - - - - Gets or sets the maximum number of iterations the calculation is allowed to perform. - - Thrown if the Maximum is set to a negative value. - - - - Returns the maximum number of iterations to the default. - - - - - Determines the status of the iterative calculation based on the stop criteria stored - by the current . Result is set into Status field. - - The number of iterations that have passed so far. - The vector containing the current solution values. - The right hand side vector. - The vector containing the current residual vectors. - - The individual stop criteria may internally track the progress of the calculation based - on the invocation of this method. Therefore this method should only be called if the - calculation has moved forwards at least one step. - - - - - Gets the current calculation status. - - - - - Resets the to the pre-calculation state. - - - - - Clones the current and its settings. - - A new instance of the class. - - - - Iterative Calculation Status - - - - - An iterator that is used to check if an iterative calculation should continue or stop. - - - - - The collection that holds all the stop criteria and the flag indicating if they should be added - to the child iterators. - - - - - The status of the iterator. - - - - - Initializes a new instance of the class with the default stop criteria. - - - - - Initializes a new instance of the class with the specified stop criteria. - - - The specified stop criteria. Only one stop criterion of each type can be passed in. None - of the stop criteria will be passed on to child iterators. - - - - - Initializes a new instance of the class with the specified stop criteria. - - - The specified stop criteria. Only one stop criterion of each type can be passed in. None - of the stop criteria will be passed on to child iterators. - - - - - Gets the current calculation status. - - - - - Determines the status of the iterative calculation based on the stop criteria stored - by the current . Result is set into Status field. - - The number of iterations that have passed so far. - The vector containing the current solution values. - The right hand side vector. - The vector containing the current residual vectors. - - The individual iterators may internally track the progress of the calculation based - on the invocation of this method. Therefore this method should only be called if the - calculation has moved forwards at least one step. - - - - - Indicates to the iterator that the iterative process has been cancelled. - - - Does not reset the stop-criteria. - - - - - Resets the to the pre-calculation state. - - - - - Creates a deep clone of the current iterator. - - The deep clone of the current iterator. - - - - Defines an that monitors residuals as stop criterion. - - - - - The maximum value for the residual below which the calculation is considered converged. 
- - - - - The minimum number of iterations for which the residual has to be below the maximum before - the calculation is considered converged. - - - - - The status of the calculation - - - - - The number of iterations since the residuals got below the maximum. - - - - - The iteration number of the last iteration. - - - - - Initializes a new instance of the class with the specified - maximum residual and minimum number of iterations. - - - The maximum value for the residual below which the calculation is considered converged. - - - The minimum number of iterations for which the residual has to be below the maximum before - the calculation is considered converged. - - - - - Gets or sets the maximum value for the residual below which the calculation is considered - converged. - - Thrown if the Maximum is set to a negative value. - - - - Gets or sets the minimum number of iterations for which the residual has to be - below the maximum before the calculation is considered converged. - - Thrown if the BelowMaximumFor is set to a value less than 1. - - - - Determines the status of the iterative calculation based on the stop criteria stored - by the current . Result is set into Status field. - - The number of iterations that have passed so far. - The vector containing the current solution values. - The right hand side vector. - The vector containing the current residual vectors. - - The individual stop criteria may internally track the progress of the calculation based - on the invocation of this method. Therefore this method should only be called if the - calculation has moved forwards at least one step. - - - - - Gets the current calculation status. - - - - - Resets the to the pre-calculation state. - - - - - Clones the current and its settings. - - A new instance of the class. - - - - Loads the available objects from the specified assembly. - - The assembly which will be searched for setup objects. - If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. - The types that should not be loaded. - - - - Loads the available objects from the specified assembly. - - The type in the assembly which should be searched for setup objects. - If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. - The types that should not be loaded. - - - - Loads the available objects from the specified assembly. - - The of the assembly that should be searched for setup objects. - If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. - The types that should not be loaded. - - - - Loads the available objects from the Math.NET Numerics assembly. - - The types that should not be loaded. - - - - Loads the available objects from the Math.NET Numerics assembly. - - - - - A unit preconditioner. This preconditioner does not actually do anything - it is only used when running an without - a preconditioner. - - - - - The coefficient matrix on which this preconditioner operates. - Is used to check dimensions on the different vectors that are processed. - - - - - Initializes the preconditioner and loads the internal data structures. - - - The matrix upon which the preconditioner is based. - - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - If and do not have the same size. - - - - or - - - - If the size of is different the number of rows of the coefficient matrix. 
- - - - - - True if the matrix storage format is dense. - - - - - True if all fields of this matrix can be set to any value. - False if some fields are fixed, like on a diagonal matrix. - - - - - True if the specified field can be set to any value. - False if the field is fixed, like an off-diagonal field on a diagonal matrix. - - - - - Retrieves the requested element without range checking. - - - - - Sets the element without range checking. - - - - - Evaluate the row and column at a specific data index. - - - - - True if the vector storage format is dense. - - - - - Retrieves the requested element without range checking. - - - - - Sets the element without range checking. - - - - - True if the matrix storage format is dense. - - - - - True if all fields of this matrix can be set to any value. - False if some fields are fixed, like on a diagonal matrix. - - - - - True if the specified field can be set to any value. - False if the field is fixed, like an off-diagonal field on a diagonal matrix. - - - - - Retrieves the requested element without range checking. - - - - - Sets the element without range checking. - - - - - Returns a hash code for this instance. - - - A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. - - - - - True if the matrix storage format is dense. - - - - - True if all fields of this matrix can be set to any value. - False if some fields are fixed, like on a diagonal matrix. - - - - - True if the specified field can be set to any value. - False if the field is fixed, like an off-diagonal field on a diagonal matrix. - - - - - Gets or sets the value at the given row and column, with range checking. - - - The row of the element. - - - The column of the element. - - The value to get or set. - This method is ranged checked. and - to get and set values without range checking. - - - - Retrieves the requested element without range checking. - - - The row of the element. - - - The column of the element. - - - The requested element. - - Not range-checked. - - - - Sets the element without range checking. - - The row of the element. - The column of the element. - The value to set the element to. - WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. - - - - Indicates whether the current object is equal to another object of the same type. - - - An object to compare with this object. - - - true if the current object is equal to the parameter; otherwise, false. - - - - - Determines whether the specified is equal to the current . - - - true if the specified is equal to the current ; otherwise, false. - - The to compare with the current . - - - - Serves as a hash function for a particular type. - - - A hash code for the current . - - - - The state array will not be modified, unless it is the same instance as the target array (which is allowed). - - - The state array will not be modified, unless it is the same instance as the target array (which is allowed). - - - The state array will not be modified, unless it is the same instance as the target array (which is allowed). - - - The state array will not be modified, unless it is the same instance as the target array (which is allowed). - - - - The array containing the row indices of the existing rows. Element "i" of the array gives the index of the - element in the array that is first non-zero element in a row "i". 
- The last value is equal to ValueCount, so that the number of non-zero entries in row "i" is always - given by RowPointers[i+i] - RowPointers[i]. This array thus has length RowCount+1. - - - - - An array containing the column indices of the non-zero values. Element "j" of the array - is the number of the column in matrix that contains the j-th value in the array. - - - - - Array that contains the non-zero elements of matrix. Values of the non-zero elements of matrix are mapped into the values - array using the row-major storage mapping described in a compressed sparse row (CSR) format. - - - - - Gets the number of non zero elements in the matrix. - - The number of non zero elements. - - - - True if the matrix storage format is dense. - - - - - True if all fields of this matrix can be set to any value. - False if some fields are fixed, like on a diagonal matrix. - - - - - True if the specified field can be set to any value. - False if the field is fixed, like an off-diagonal field on a diagonal matrix. - - - - - Retrieves the requested element without range checking. - - - The row of the element. - - - The column of the element. - - - The requested element. - - Not range-checked. - - - - Sets the element without range checking. - - The row of the element. - The column of the element. - The value to set the element to. - WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. - - - - Delete value from internal storage - - Index of value in nonZeroValues array - Row number of matrix - WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks - - - - Find item Index in nonZeroValues array - - Matrix row index - Matrix column index - Item index - WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks - - - - Calculates the amount with which to grow the storage array's if they need to be - increased in size. - - The amount grown. - - - - Returns a hash code for this instance. - - - A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. - - - - - Array that contains the indices of the non-zero values. - - - - - Array that contains the non-zero elements of the vector. - - - - - Gets the number of non-zero elements in the vector. - - - - - True if the vector storage format is dense. - - - - - Retrieves the requested element without range checking. - - - - - Sets the element without range checking. - - - - - Calculates the amount with which to grow the storage array's if they need to be - increased in size. - - The amount grown. - - - - Returns a hash code for this instance. - - - A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. - - - - - True if the vector storage format is dense. - - - - - Gets or sets the value at the given index, with range checking. - - - The index of the element. - - The value to get or set. - This method is ranged checked. and - to get and set values without range checking. - - - - Retrieves the requested element without range checking. - - The index of the element. - The requested element. - Not range-checked. - - - - Sets the element without range checking. - - The index of the element. - The value to set the element to. - WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. - - - - Indicates whether the current object is equal to another object of the same type. - - - An object to compare with this object. 
- - - true if the current object is equal to the parameter; otherwise, false. - - - - - Determines whether the specified is equal to the current . - - - true if the specified is equal to the current ; otherwise, false. - - The to compare with the current . - - - - Serves as a hash function for a particular type. - - - A hash code for the current . - - - - - Defines the generic class for Vector classes. - - Supported data types are double, single, , and . - - - - The zero value for type T. - - - - - The value of 1.0 for type T. - - - - - Negates vector and save result to - - Target vector - - - - Complex conjugates vector and save result to - - Target vector - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - The scalar to add. - The vector to store the result of the addition. - - - - Adds another vector to this vector and stores the result into the result vector. - - The vector to add to this one. - The vector to store the result of the addition. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The vector to store the result of the subtraction. - - - - Subtracts each element of the vector from a scalar and stores the result in the result vector. - - The scalar to subtract from. - The vector to store the result of the subtraction. - - - - Subtracts another vector to this vector and stores the result into the result vector. - - The vector to subtract from this one. - The vector to store the result of the subtraction. - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - The scalar to multiply. - The vector to store the result of the multiplication. - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Computes the outer product M[i,j] = u[i]*v[j] of this and another vector and stores the result in the result matrix. - - The other vector - The matrix to store the result of the product. - - - - Divides each element of the vector by a scalar and stores the result in the result vector. - - The scalar denominator to use. - The vector to store the result of the division. - - - - Divides a scalar by each element of the vector and stores the result in the result vector. - - The scalar numerator to use. - The vector to store the result of the division. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. 
- - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the division. - - - - Pointwise raise this vector to an exponent and store the result into the result vector. - - The exponent to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Adds a scalar to each element of the vector. - - The scalar to add. - A copy of the vector with the scalar added. - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - The scalar to add. - The vector to store the result of the addition. - If this vector and are not the same size. - - - - Adds another vector to this vector. - - The vector to add to this one. - A new vector containing the sum of both vectors. - If this vector and are not the same size. - - - - Adds another vector to this vector and stores the result into the result vector. - - The vector to add to this one. - The vector to store the result of the addition. - If this vector and are not the same size. - If this vector and are not the same size. - - - - Subtracts a scalar from each element of the vector. - - The scalar to subtract. - A new vector containing the subtraction of this vector and the scalar. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The vector to store the result of the subtraction. - If this vector and are not the same size. - - - - Subtracts each element of the vector from a scalar. - - The scalar to subtract from. - A new vector containing the subtraction of the scalar and this vector. - - - - Subtracts each element of the vector from a scalar and stores the result in the result vector. - - The scalar to subtract from. - The vector to store the result of the subtraction. - If this vector and are not the same size. - - - - Returns a negated vector. - - The negated vector. - Added as an alternative to the unary negation operator. - - - - Negates vector and save result to - - Target vector - - - - Subtracts another vector from this vector. - - The vector to subtract from this one. - A new vector containing the subtraction of the two vectors. - If this vector and are not the same size. 
[Removed file content: auto-generated .NET XML API documentation for a numerics library bundled with the application, covering vector arithmetic and pointwise functions, vector norms, least-squares regression and curve fitting, ODE solvers (Adams-Bashforth, Runge-Kutta), optimizers (BFGS, BFGS-B, L-BFGS, Levenberg-Marquardt, Nelder-Mead, trust region), permutations, polynomials, and floating-point comparison utilities. The deleted lines consist only of doc-comment fragments with their XML markup stripped and are not reproduced here.]
- - - - - Standard epsilon, the maximum relative precision of IEEE 754 double-precision floating numbers (64 bit). - According to the definition of Prof. Higham and used in the ISO C standard and MATLAB. - - - - - Standard epsilon, the maximum relative precision of IEEE 754 single-precision floating numbers (32 bit). - According to the definition of Prof. Demmel and used in LAPACK and Scilab. - - - - - Standard epsilon, the maximum relative precision of IEEE 754 single-precision floating numbers (32 bit). - According to the definition of Prof. Higham and used in the ISO C standard and MATLAB. - - - - - Actual double precision machine epsilon, the smallest number that can be subtracted from 1, yielding a results different than 1. - This is also known as unit roundoff error. According to the definition of Prof. Demmel. - On a standard machine this is equivalent to `DoublePrecision`. - - - - - Actual double precision machine epsilon, the smallest number that can be added to 1, yielding a results different than 1. - This is also known as unit roundoff error. According to the definition of Prof. Higham. - On a standard machine this is equivalent to `PositiveDoublePrecision`. - - - - - The number of significant decimal places of double-precision floating numbers (64 bit). - - - - - The number of significant decimal places of single-precision floating numbers (32 bit). - - - - - Value representing 10 * 2^(-53) = 1.11022302462516E-15 - - - - - Value representing 10 * 2^(-24) = 5.96046447753906E-07 - - - - - Returns the magnitude of the number. - - The value. - The magnitude of the number. - - - - Returns the magnitude of the number. - - The value. - The magnitude of the number. - - - - Returns the number divided by it's magnitude, effectively returning a number between -10 and 10. - - The value. - The value of the number. - - - - Returns a 'directional' long value. This is a long value which acts the same as a double, - e.g. a negative double value will return a negative double value starting at 0 and going - more negative as the double value gets more negative. - - The input double value. - A long value which is roughly the equivalent of the double value. - - - - Returns a 'directional' int value. This is a int value which acts the same as a float, - e.g. a negative float value will return a negative int value starting at 0 and going - more negative as the float value gets more negative. - - The input float value. - An int value which is roughly the equivalent of the double value. - - - - Increments a floating point number to the next bigger number representable by the data type. - - The value which needs to be incremented. - How many times the number should be incremented. - - The incrementation step length depends on the provided value. - Increment(double.MaxValue) will return positive infinity. - - The next larger floating point value. - - - - Decrements a floating point number to the next smaller number representable by the data type. - - The value which should be decremented. - How many times the number should be decremented. - - The decrementation step length depends on the provided value. - Decrement(double.MinValue) will return negative infinity. - - The next smaller floating point value. - - - - Forces small numbers near zero to zero, according to the specified absolute accuracy. - - The real number to coerce to zero, if it is almost zero. - The maximum count of numbers between the zero and the number . - - Zero if || is fewer than numbers from zero, otherwise. 
- - - - - Forces small numbers near zero to zero, according to the specified absolute accuracy. - - The real number to coerce to zero, if it is almost zero. - The maximum count of numbers between the zero and the number . - - Zero if || is fewer than numbers from zero, otherwise. - - - Thrown if is smaller than zero. - - - - - Forces small numbers near zero to zero, according to the specified absolute accuracy. - - The real number to coerce to zero, if it is almost zero. - The absolute threshold for to consider it as zero. - Zero if || is smaller than , otherwise. - - Thrown if is smaller than zero. - - - - - Forces small numbers near zero to zero. - - The real number to coerce to zero, if it is almost zero. - Zero if || is smaller than 2^(-53) = 1.11e-16, otherwise. - - - - Determines the range of floating point numbers that will match the specified value with the given tolerance. - - The value. - The ulps difference. - - Thrown if is smaller than zero. - - Tuple of the bottom and top range ends. - - - - Returns the floating point number that will match the value with the tolerance on the maximum size (i.e. the result is - always bigger than the value) - - The value. - The ulps difference. - The maximum floating point number which is larger than the given . - - - - Returns the floating point number that will match the value with the tolerance on the minimum size (i.e. the result is - always smaller than the value) - - The value. - The ulps difference. - The minimum floating point number which is smaller than the given . - - - - Determines the range of ulps that will match the specified value with the given tolerance. - - The value. - The relative difference. - - Thrown if is smaller than zero. - - - Thrown if is double.PositiveInfinity or double.NegativeInfinity. - - - Thrown if is double.NaN. - - - Tuple with the number of ULPS between the value and the value - relativeDifference as first, - and the number of ULPS between the value and the value + relativeDifference as second value. - - - - - Evaluates the count of numbers between two double numbers - - The first parameter. - The second parameter. - The second number is included in the number, thus two equal numbers evaluate to zero and two neighbor numbers evaluate to one. Therefore, what is returned is actually the count of numbers between plus 1. - The number of floating point values between and . - - Thrown if is double.PositiveInfinity or double.NegativeInfinity. - - - Thrown if is double.NaN. - - - Thrown if is double.PositiveInfinity or double.NegativeInfinity. - - - Thrown if is double.NaN. - - - - - Evaluates the minimum distance to the next distinguishable number near the argument value. - - The value used to determine the minimum distance. - - Relative Epsilon (positive double or NaN). - - Evaluates the negative epsilon. The more common positive epsilon is equal to two times this negative epsilon. - - - - - Evaluates the minimum distance to the next distinguishable number near the argument value. - - The value used to determine the minimum distance. - - Relative Epsilon (positive float or NaN). - - Evaluates the negative epsilon. The more common positive epsilon is equal to two times this negative epsilon. - - - - - Evaluates the minimum distance to the next distinguishable number near the argument value. - - The value used to determine the minimum distance. - Relative Epsilon (positive double or NaN) - Evaluates the positive epsilon. 
See also - - - - - Evaluates the minimum distance to the next distinguishable number near the argument value. - - The value used to determine the minimum distance. - Relative Epsilon (positive float or NaN) - Evaluates the positive epsilon. See also - - - - - Calculates the actual (negative) double precision machine epsilon - the smallest number that can be subtracted from 1, yielding a results different than 1. - This is also known as unit roundoff error. According to the definition of Prof. Demmel. - - Positive Machine epsilon - - - - Calculates the actual positive double precision machine epsilon - the smallest number that can be added to 1, yielding a results different than 1. - This is also known as unit roundoff error. According to the definition of Prof. Higham. - - Machine epsilon - - - - Compares two doubles and determines if they are equal - within the specified maximum absolute error. - - The norm of the first value (can be negative). - The norm of the second value (can be negative). - The norm of the difference of the two values (can be negative). - The absolute accuracy required for being almost equal. - True if both doubles are almost equal up to the specified maximum absolute error, false otherwise. - - - - Compares two doubles and determines if they are equal - within the specified maximum absolute error. - - The first value. - The second value. - The absolute accuracy required for being almost equal. - True if both doubles are almost equal up to the specified maximum absolute error, false otherwise. - - - - Compares two doubles and determines if they are equal - within the specified maximum error. - - The norm of the first value (can be negative). - The norm of the second value (can be negative). - The norm of the difference of the two values (can be negative). - The accuracy required for being almost equal. - True if both doubles are almost equal up to the specified maximum error, false otherwise. - - - - Compares two doubles and determines if they are equal - within the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - True if both doubles are almost equal up to the specified maximum error, false otherwise. - - - - Compares two doubles and determines if they are equal within - the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two complex and determines if they are equal within - the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two complex and determines if they are equal within - the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two complex and determines if they are equal within - the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two doubles and determines if they are equal within - the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two complex and determines if they are equal within - the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two complex and determines if they are equal within - the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. 
- - - - Compares two complex and determines if they are equal within - the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Checks whether two real numbers are almost equal. - - The first number - The second number - true if the two values differ by no more than 10 * 2^(-52); false otherwise. - - - - Checks whether two real numbers are almost equal. - - The first number - The second number - true if the two values differ by no more than 10 * 2^(-52); false otherwise. - - - - Checks whether two Complex numbers are almost equal. - - The first number - The second number - true if the two values differ by no more than 10 * 2^(-52); false otherwise. - - - - Checks whether two Complex numbers are almost equal. - - The first number - The second number - true if the two values differ by no more than 10 * 2^(-52); false otherwise. - - - - Checks whether two real numbers are almost equal. - - The first number - The second number - true if the two values differ by no more than 10 * 2^(-52); false otherwise. - - - - Checks whether two real numbers are almost equal. - - The first number - The second number - true if the two values differ by no more than 10 * 2^(-52); false otherwise. - - - - Checks whether two Complex numbers are almost equal. - - The first number - The second number - true if the two values differ by no more than 10 * 2^(-52); false otherwise. - - - - Checks whether two Complex numbers are almost equal. - - The first number - The second number - true if the two values differ by no more than 10 * 2^(-52); false otherwise. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the - number of decimal places as an absolute measure. - - - - The values are equal if the difference between the two numbers is smaller than 0.5e-decimalPlaces. We divide by - two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between - 0.005 and 0.015, but not 0.02 and not 0.00 - - - The norm of the first value (can be negative). - The norm of the second value (can be negative). - The norm of the difference of the two values (can be negative). - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the - number of decimal places as an absolute measure. - - - - The values are equal if the difference between the two numbers is smaller than 0.5e-decimalPlaces. We divide by - two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between - 0.005 and 0.015, but not 0.02 and not 0.00 - - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers - are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - - - The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by - two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between - 0.005 and 0.015, but not 0.02 and not 0.00 - - - The norm of the first value (can be negative). - The norm of the second value (can be negative). - The norm of the difference of the two values (can be negative). - The number of decimal places. 
- Thrown if is smaller than zero. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers - are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - - - The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by - two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between - 0.005 and 0.015, but not 0.02 and not 0.00 - - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the - number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the - number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the - number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the - number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers - are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers - are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers - are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers - are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. - - - - Determines the 'number' of floating point numbers between two values (i.e. the number of discrete steps - between the two numbers) and then checks if that is within the specified tolerance. So if a tolerance - of 1 is passed then the result will be true only if the two numbers have the same binary representation - OR if they are two adjacent numbers that only differ by one step. 
- - - The comparison method used is explained in http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm . The article - at http://www.extremeoptimization.com/resources/Articles/FPDotNetConceptsAndFormats.aspx explains how to transform the C code to - .NET enabled code without using pointers and unsafe code. - - - The first value. - The second value. - The maximum number of floating point values between the two values. Must be 1 or larger. - Thrown if is smaller than one. - - - - Compares two floats and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. - - The first value. - The second value. - The maximum number of floating point values between the two values. Must be 1 or larger. - Thrown if is smaller than one. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. 
- - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two vectors and determines if they are equal within the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two vectors and determines if they are equal within the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two vectors and determines if they are equal to within the specified number - of decimal places or not, using the number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two vectors and determines if they are equal to within the specified number of decimal places or not. - If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Compares two matrices and determines if they are equal within the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two matrices and determines if they are equal within the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two matrices and determines if they are equal to within the specified number - of decimal places or not, using the number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two matrices and determines if they are equal to within the specified number of decimal places or not. - If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Support Interface for Precision Operations (like AlmostEquals). - - Type of the implementing class. - - - - Returns a Norm of a value of this type, which is appropriate for measuring how - close this value is to zero. - - A norm of this value. - - - - Returns a Norm of the difference of two values of this type, which is - appropriate for measuring how close together these two values are. - - The value to compare with. - A norm of the difference between this and the other value. - - - - Consistency vs. performance trade-off between runs on different machines. 
- - - - Consistent on the same CPU only (maximum performance) - - - Consistent on Intel and compatible CPUs with SSE2 support (maximum compatibility) - - - Consistent on Intel CPUs supporting SSE2 or later - - - Consistent on Intel CPUs supporting SSE4.2 or later - - - Consistent on Intel CPUs supporting AVX or later - - - Consistent on Intel CPUs supporting AVX2 or later - - - - Gets or sets the Fourier transform provider. Consider to use UseNativeMKL or UseManaged instead. - - The linear algebra provider. - - - - Optional path to try to load native provider binaries from. - If not set, Numerics will fall back to the environment variable - `MathNetNumericsFFTProviderPath` or the default probing paths. - - - - - Use the best provider available. - - - - - Use a specific provider if configured, e.g. using the - "MathNetNumericsFFTProvider" environment variable, - or fall back to the best provider. - - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - Sequences with length greater than Math.Sqrt(Int32.MaxValue) + 1 - will cause k*k in the Bluestein sequence to overflow (GH-286). - - - - - Generate the bluestein sequence for the provided problem size. - - Number of samples. - Bluestein sequence exp(I*Pi*k^2/N) - - - - Generate the bluestein sequence for the provided problem size. - - Number of samples. - Bluestein sequence exp(I*Pi*k^2/N) - - - - Convolution with the bluestein sequence (Parallel Version). - - Sample Vector. - - - - Convolution with the bluestein sequence (Parallel Version). - - Sample Vector. - - - - Swap the real and imaginary parts of each sample. - - Sample Vector. - - - - Swap the real and imaginary parts of each sample. - - Sample Vector. - - - - Bluestein generic FFT for arbitrary sized sample vectors. - - - - - Bluestein generic FFT for arbitrary sized sample vectors. - - - - - Bluestein generic FFT for arbitrary sized sample vectors. - - - - - Bluestein generic FFT for arbitrary sized sample vectors. - - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - Fully rescale the FFT result. - - Sample Vector. - - - - Fully rescale the FFT result. - - Sample Vector. - - - - Half rescale the FFT result (e.g. for symmetric transforms). - - Sample Vector. - - - - Fully rescale the FFT result (e.g. for symmetric transforms). - - Sample Vector. - - - - Radix-2 Reorder Helper Method - - Sample type - Sample vector - - - - Radix-2 Step Helper Method - - Sample vector. - Fourier series exponent sign. - Level Group Size. - Index inside of the level. - - - - Radix-2 Step Helper Method - - Sample vector. - Fourier series exponent sign. - Level Group Size. - Index inside of the level. - - - - Radix-2 generic FFT for power-of-two sized sample vectors. 
- - - - - Radix-2 generic FFT for power-of-two sized sample vectors. - - - - - Radix-2 generic FFT for power-of-two sized sample vectors. - - - - - Radix-2 generic FFT for power-of-two sized sample vectors. - - - - - Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). - - - - - Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). - - - - - Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). - - - - - Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). - - - - - How to transpose a matrix. - - - - - Don't transpose a matrix. - - - - - Transpose a matrix. - - - - - Conjugate transpose a complex matrix. - - If a conjugate transpose is used with a real matrix, then the matrix is just transposed. - - - - Types of matrix norms. - - - - - The 1-norm. - - - - - The Frobenius norm. - - - - - The infinity norm. - - - - - The largest absolute value norm. - - - - - Interface to linear algebra algorithms that work off 1-D arrays. - - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - Interface to linear algebra algorithms that work off 1-D arrays. - - Supported data types are Double, Single, Complex, and Complex32. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiply elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. 
- There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. 
- This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the full QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by QR factor. This is only used for the managed provider and can be - null for the native provider. The native provider uses the Q portion stored in the R matrix. - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - On entry the B matrix; on exit the X matrix. - The number of columns of B. - On exit, the solution matrix. - Rows must be greater or equal to columns. - The type of QR factorization to perform. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. 
- The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Gets or sets the linear algebra provider. - Consider to use UseNativeMKL or UseManaged instead. - - The linear algebra provider. - - - - Optional path to try to load native provider binaries from. - If not set, Numerics will fall back to the environment variable - `MathNetNumericsLAProviderPath` or the default probing paths. - - - - - Use the best provider available. - - - - - Use a specific provider if configured, e.g. using the - "MathNetNumericsLAProvider" environment variable, - or fall back to the best provider. - - - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - The managed linear algebra provider. - - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. 
- - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Cache-Oblivious Matrix Multiplication - - if set to true transpose matrix A. - if set to true transpose matrix B. - The value to scale the matrix A with. - The matrix A. - Row-shift of the left matrix - Column-shift of the left matrix - The matrix B. - Row-shift of the right matrix - Column-shift of the right matrix - The matrix C. - Row-shift of the result matrix - Column-shift of the result matrix - The number of rows of matrix op(A) and of the matrix C. - The number of columns of matrix op(B) and of the matrix C. - The number of columns of matrix op(A) and the rows of the matrix op(B). - The constant number of rows of matrix op(A) and of the matrix C. - The constant number of columns of matrix op(B) and of the matrix C. - The constant number of columns of matrix op(A) and the rows of the matrix op(B). - Indicates if this is the first recursion. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. 
Removed in this hunk: auto-generated XML documentation for a .NET linear algebra provider (the wording matches the Math.NET Numerics provider interface), with the same doc comments repeated once per numeric type and again for the native provider. The removed comments describe:

* BLAS-like vector routines: scaled vector addition result = y + alpha*x (AXPY), scaling (SCAL), conjugation and the dot product (DOT)
* point-wise array addition, subtraction, multiplication, division and power, noted as having no direct BLAS equivalent
* matrix norms, the simplified product result = x*y, the full update c = alpha*op(a)*op(b) + beta*c (GEMM) and a cache-oblivious matrix multiplication helper
* LU factorization, inversion and solves (GETRF/GETRI/GETRS), Cholesky factorization and solves (POTRF/POTRS), QR factorization and solves (GEQRF/ORGQR), singular value decomposition and solves (GESVD), the Givens rotation (DROTG) and eigenvalue/eigenvector decomposition
* native-provider housekeeping: an availability check, initialization with fallback to the managed provider, and freeing of buffers, caches and handles
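The removed comments state these operations only in prose, so here is a minimal C++ sketch of the two update formulas they quote most often, result = y + alpha*x and c = alpha*a*b + beta*c. Plain loops over std::vector, column-major storage and no transposition are assumptions of this sketch; it illustrates the documented semantics, not the library's actual code.

```cpp
#include <cstddef>
#include <vector>

// result = y + alpha * x  (AXPY-style update)
static void axpy(double alpha, const std::vector<double>& x,
                 const std::vector<double>& y, std::vector<double>& result)
{
    for (std::size_t i = 0; i < x.size(); ++i)
        result[i] = y[i] + alpha * x[i];
}

// c = alpha * a * b + beta * c  (GEMM-style update, no transposition)
// a is m x k, b is k x n, c is m x n; column-major storage assumed.
static void gemm(double alpha, const std::vector<double>& a,
                 const std::vector<double>& b, double beta,
                 std::vector<double>& c, int m, int n, int k)
{
    for (int col = 0; col < n; ++col)
    {
        for (int row = 0; row < m; ++row)
        {
            double sum = 0.0;
            for (int i = 0; i < k; ++i)
                sum += a[i * m + row] * b[col * k + i];
            c[col * m + row] = alpha * sum + beta * c[col * m + row];
        }
    }
}
```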
Removed in the same hunk: the matching documentation for the managed linear algebra provider, again repeated per numeric type. Besides the same vector, point-wise, norm, multiplication, LU, Cholesky, QR, SVD and eigenvalue routines, it documents the EISPACK-derived eigenvalue helpers: reduction of a complex Hermitian matrix to real symmetric tridiagonal form (HTRIDI), the symmetric tridiagonal QL algorithm (tql2), back-transformation of the eigenvectors (HTRIBK), nonsymmetric reduction to Hessenberg form (orthes and ortran) and the reduction from Hessenberg to real Schur form (hqr2), plus the Givens rotation (DROTG).
- - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Symmetric Householder reduction to tridiagonal form. - - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tred2 by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Nonsymmetric reduction to Hessenberg form. - - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Complex scalar division X/Y. - - Real part of X - Imaginary part of X - Real part of Y - Imaginary part of Y - Division result as a number. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. 
- This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . 
- This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. 
- The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Symmetric Householder reduction to tridiagonal form. - - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tred2 by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. 
- - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Nonsymmetric reduction to Hessenberg form. - - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Complex scalar division X/Y. - - Real part of X - Imaginary part of X - Real part of Y - Imaginary part of Y - Division result as a number. - - - - A random number generator based on the class in the .NET library. - - - - - Construct a new random number generator with a random seed. - - Uses and uses the value of - to set whether the instance is thread safe. - - - - Construct a new random number generator with random seed. - - The to use. - Uses the value of to set whether the instance is thread safe. - - - - Construct a new random number generator with random seed. - - Uses - if set to true , the class is thread safe. - - - - Construct a new random number generator with random seed. - - The to use. - if set to true , the class is thread safe. - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Multiplicative congruential generator using a modulus of 2^31-1 and a multiplier of 1132489760. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. 
- - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Multiplicative congruential generator using a modulus of 2^59 and a multiplier of 13^13. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - The seed is set to 1, if the zero is used as the seed. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Random number generator using Mersenne Twister 19937 algorithm. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - Uses the value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. - - - - Default instance, thread-safe. - - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. 
- - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - A 32-bit combined multiple recursive generator with 2 components of order 3. - - Based off of P. L'Ecuyer, "Combined Multiple Recursive Random Number Generators," Operations Research, 44, 5 (1996), 816--822. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Represents a Parallel Additive Lagged Fibonacci pseudo-random number generator. - - - The type bases upon the implementation in the - Boost Random Number Library. - It uses the modulus 232 and by default the "lags" 418 and 1279. Some popular pairs are presented on - Wikipedia - Lagged Fibonacci generator. - - - - - Default value for the ShortLag - - - - - Default value for the LongLag - - - - - The multiplier to compute a double-precision floating point number [0, 1) - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. - The ShortLag value - TheLongLag value - - - - Gets the short lag of the Lagged Fibonacci pseudo-random number generator. - - - - - Gets the long lag of the Lagged Fibonacci pseudo-random number generator. - - - - - Stores an array of random numbers - - - - - Stores an index for the random number array element that will be accessed next. - - - - - Fills the array with new unsigned random numbers. 
- - - Generated random numbers are 32-bit unsigned integers greater than or equal to 0 - and less than or equal to . - - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - This class implements extension methods for the System.Random class. The extension methods generate - pseudo-random distributed numbers for types other than double and int32. - - - - - Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The random number generator. - The array to fill with random values. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The random number generator. - The size of the array to fill. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an array of uniform random bytes. - - The random number generator. - The size of the array to fill. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Fills an array with uniform random 32-bit signed integers greater than or equal to zero and less than . - - The random number generator. - The array to fill with random values. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Fills an array with uniform random 32-bit signed integers within the specified range. - - The random number generator. - The array to fill with random values. - Lower bound, inclusive. - Upper bound, exclusive. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a nonnegative random number less than . - - The random number generator. - - A 64-bit signed integer greater than or equal to 0, and less than ; that is, - the range of return values includes 0 but not . 
- - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a random number of the full Int32 range. - - The random number generator. - - A 32-bit signed integer of the full range, including 0, negative numbers, - and . - - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a random number of the full Int64 range. - - The random number generator. - - A 64-bit signed integer of the full range, including 0, negative numbers, - and . - - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a nonnegative decimal floating point random number less than 1.0. - - The random number generator. - - A decimal floating point number greater than or equal to 0.0, and less than 1.0; that is, - the range of return values includes 0.0 but not 1.0. - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a random boolean. - - The random number generator. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Provides a time-dependent seed value, matching the default behavior of System.Random. - WARNING: There is no randomness in this seed and quick repeated calls can cause - the same seed value. Do not use for cryptography! - - - - - Provides a seed based on time and unique GUIDs. - WARNING: There is only low randomness in this seed, but at least quick repeated - calls will result in different seed values. Do not use for cryptography! - - - - - Provides a seed based on an internal random number generator (crypto if available), time and unique GUIDs. - WARNING: There is only medium randomness in this seed, but quick repeated - calls will result in different seed values. Do not use for cryptography! - - - - - Base class for random number generators. This class introduces a layer between - and the Math.Net Numerics random number generators to provide thread safety. - When used directly it use the System.Random as random number source. - - - - - Initializes a new instance of the class using - the value of to set whether - the instance is thread safe or not. - - - - - Initializes a new instance of the class. - - if set to true , the class is thread safe. - Thread safe instances are two and half times slower than non-thread - safe classes. - - - - Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The array to fill with random values. - - - - Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The size of the array to fill. - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than . - - - - - Returns a random number less then a specified maximum. - - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. - A 32-bit signed integer less than . - is zero or negative. - - - - Returns a random number within a specified range. 
- - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - A 32-bit signed integer greater than or equal to and less than ; that is, the range of return values includes but not . If equals , is returned. - - is greater than . - - - - Fills an array with random 32-bit signed integers greater than or equal to zero and less than . - - The array to fill with random values. - - - - Returns an array with random 32-bit signed integers greater than or equal to zero and less than . - - The size of the array to fill. - - - - Fills an array with random numbers within a specified range. - - The array to fill with random values. - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. - - - - Returns an array with random 32-bit signed integers within the specified range. - - The size of the array to fill. - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. - - - - Fills an array with random numbers within a specified range. - - The array to fill with random values. - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Returns an array with random 32-bit signed integers within the specified range. - - The size of the array to fill. - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Returns an infinite sequence of random 32-bit signed integers greater than or equal to zero and less than . - - - - - Returns an infinite sequence of random numbers within a specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Fills the elements of a specified array of bytes with random numbers. - - An array of bytes to contain random numbers. - is null. - - - - Returns a random number between 0.0 and 1.0. - - A double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than 2147483647 (). - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 32 (not verified). - - - - - Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 64 (not verified). - - - - - Returns a random 32-bit signed integer within the specified range. - - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). - - - - Returns a random 32-bit signed integer within the specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). - - - - A random number generator based on the class in the .NET library. - - - - - Construct a new random number generator with a random seed. 
- - - - - Construct a new random number generator with random seed. - - if set to true , the class is thread safe. - - - - Construct a new random number generator with random seed. - - The seed value. - - - - Construct a new random number generator with random seed. - - The seed value. - if set to true , the class is thread safe. - - - - Default instance, thread-safe. - - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Returns a random 32-bit signed integer within the specified range. - - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). - - - - Returns a random 32-bit signed integer within the specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Fill an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. - WARNING: potentially very short random sequence length, can generate repeated partial sequences. - - Parallelized on large length, but also supports being called in parallel from multiple threads - - - - Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. - WARNING: potentially very short random sequence length, can generate repeated partial sequences. - - Parallelized on large length, but also supports being called in parallel from multiple threads - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Wichmann-Hill’s 1982 combined multiplicative congruential generator. - - See: Wichmann, B. A. & Hill, I. D. (1982), "Algorithm AS 183: - An efficient and portable pseudo-random number generator". Applied Statistics 31 (1982) 188-190 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - The seed is set to 1, if the zero is used as the seed. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. 
- - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Wichmann-Hill’s 2006 combined multiplicative congruential generator. - - See: Wichmann, B. A. & Hill, I. D. (2006), "Generating good pseudo-random numbers". - Computational Statistics & Data Analysis 51:3 (2006) 1614-1622 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - The seed is set to 1, if the zero is used as the seed. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Implements a multiply-with-carry Xorshift pseudo random number generator (RNG) specified in Marsaglia, George. (2003). Xorshift RNGs. - Xn = a * Xn−3 + c mod 2^32 - http://www.jstatsoft.org/v08/i14/paper - - - - - The default value for X1. - - - - - The default value for X2. - - - - - The default value for the multiplier. - - - - - The default value for the carry over. - - - - - The multiplier to compute a double-precision floating point number [0, 1) - - - - - Seed or last but three unsigned random number. - - - - - Last but two unsigned random number. - - - - - Last but one unsigned random number. - - - - - The value of the carry over. - - - - - The multiplier. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - Note: must be less than . - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. 
- - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - must be less than . - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - must be less than . - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. - - Uses the default values of: - - a = 916905990 - c = 13579 - X1 = 77465321 - X2 = 362436069 - - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. - The multiply value - The initial carry value. - The initial value if X1. - The initial value if X2. - must be less than . - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Xoshiro256** pseudo random number generator. - A random number generator based on the class in the .NET library. - - - This is xoshiro256** 1.0, our all-purpose, rock-solid generator. It has - excellent(sub-ns) speed, a state space(256 bits) that is large enough - for any parallel application, and it passes all tests we are aware of. - - For generating just floating-point numbers, xoshiro256+ is even faster. - - The state must be seeded so that it is not everywhere zero.If you have - a 64-bit seed, we suggest to seed a splitmix64 generator and use its - output to fill s. - - For further details see: - David Blackman & Sebastiano Vigna (2018), "Scrambled Linear Pseudorandom Number Generators". - https://arxiv.org/abs/1805.01407 - - - - - Construct a new random number generator with a random seed. - - - - - Construct a new random number generator with random seed. - - if set to true , the class is thread safe. - - - - Construct a new random number generator with random seed. - - The seed value. - - - - Construct a new random number generator with random seed. - - The seed value. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. 
- - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 32 (not verified). - - - - - Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 64 (not verified). - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Splitmix64 RNG. - - RNG state. This can take any value, including zero. - A new random UInt64. - - Splitmix64 produces equidistributed outputs, thus if a zero is generated then the - next zero will be after a further 2^64 outputs. - - - - - Bisection root-finding algorithm. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. - Guess for the high value of the range where the root is supposed to be. Will be expanded if needed. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Factor at which to expand the bounds, if needed. Default 1.6. - Maximum number of expand iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy for both the root and the function value at the root. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. - Maximum number of iterations. Usually 100. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - - Algorithm by Brent, Van Wijngaarden, Dekker et al. - Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. - Guess for the high value of the range where the root is supposed to be. 
Will be expanded if needed. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Factor at which to expand the bounds, if needed. Default 1.6. - Maximum number of expand iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. - Maximum number of iterations. Usually 100. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - Helper method useful for preventing rounding errors. - a*sign(b) - - - - Algorithm by Broyden. - Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - Initial guess of the root. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Relative step size for calculating the Jacobian matrix at first step. Default 1.0e-4 - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - Initial guess of the root. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. - Maximum number of iterations. Usually 100. - Relative step size for calculating the Jacobian matrix at first step. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - Find a solution of the equation f(x)=0. - The function to find roots from. - Initial guess of the root. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. - Maximum number of iterations. Usually 100. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - - Helper method to calculate an approximation of the Jacobian. - - The function. - The argument (initial guess). - The result (of initial guess). - Relative step size for calculating the Jacobian. - - - - Finds roots to the cubic equation x^3 + a2*x^2 + a1*x + a0 = 0 - Implements the cubic formula in http://mathworld.wolfram.com/CubicFormula.html - - - - - Q and R are transformed variables. 
- - - - - n^(1/3) - work around a negative double raised to (1/3) - - - - - Find all real-valued roots of the cubic equation a0 + a1*x + a2*x^2 + x^3 = 0. - Note the special coefficient order ascending by exponent (consistent with polynomials). - - - - - Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. - Note the special coefficient order ascending by exponent (consistent with polynomials). - - - - - Pure Newton-Raphson root-finding algorithm without any recovery measures in cases it behaves badly. - The algorithm aborts immediately if the root leaves the bound interval. - - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. - The high value of the range where the root is supposed to be. Aborts if it leaves the interval. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - Initial guess of the root. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. - The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - Initial guess of the root. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. - The high value of the range where the root is supposed to be. Aborts if it leaves the interval. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. - Maximum number of iterations. Example: 100. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - - Robust Newton-Raphson root-finding algorithm that falls back to bisection when overshooting or converging too slow, or to subdivision on lacking bracketing. - - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Default 20. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - The low value of the range where the root is supposed to be. 
- The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. - Maximum number of iterations. Example: 100. - How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Example: 20. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false. - - - - Pure Secant root-finding algorithm without any recovery measures in cases it behaves badly. - The algorithm aborts immediately if the root leaves the bound interval. - - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first guess of the root within the bounds specified. - The second guess of the root within the bounds specified. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. - The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. - Maximum number of iterations. Default 100. - Returns the root with the specified accuracy. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first guess of the root within the bounds specified. - The second guess of the root within the bounds specified. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. - The low value of the range where the root is supposed to be. Aborts if it leaves the interval. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. - Maximum number of iterations. Example: 100. - The root that was found, if any. Undefined if the function returns false. - True if a root with the specified accuracy was found, else false - - - Detect a range containing at least one root. - The function to detect roots from. - Lower value of the range. - Upper value of the range - The growing factor of research. Usually 1.6. - Maximum number of iterations. Usually 50. - True if the bracketing operation succeeded, false otherwise. - This iterative methods stops when two values with opposite signs are found. - - - - Sorting algorithms for single, tuple and triple lists. - - - - - Sort a list of keys, in place using the quick sort algorithm using the quick sort algorithm. - - The type of elements in the key list. - List to sort. - Comparison, defining the sort order. - - - - Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. - - The type of elements in the key list. - The type of elements in the item list. - List to sort. - List to permute the same way as the key list. - Comparison, defining the sort order. - - - - Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. - - The type of elements in the key list. - The type of elements in the first item list. - The type of elements in the second item list. - List to sort. - First list to permute the same way as the key list. - Second list to permute the same way as the key list. - Comparison, defining the sort order. - - - - Sort a range of a list of keys, in place using the quick sort algorithm. 
- - The type of element in the list. - List to sort. - The zero-based starting index of the range to sort. - The length of the range to sort. - Comparison, defining the sort order. - - - - Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. - - The type of elements in the key list. - The type of elements in the item list. - List to sort. - List to permute the same way as the key list. - The zero-based starting index of the range to sort. - The length of the range to sort. - Comparison, defining the sort order. - - - - Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. - - The type of elements in the key list. - The type of elements in the first item list. - The type of elements in the second item list. - List to sort. - First list to permute the same way as the key list. - Second list to permute the same way as the key list. - The zero-based starting index of the range to sort. - The length of the range to sort. - Comparison, defining the sort order. - - - - Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. - - The type of elements in the primary list. - The type of elements in the secondary list. - List to sort. - List to sort on duplicate primary items, and permute the same way as the key list. - Comparison, defining the primary sort order. - Comparison, defining the secondary sort order. - - - - Recursive implementation for an in place quick sort on a list. - - The type of the list on which the quick sort is performed. - The list which is sorted using quick sort. - The method with which to compare two elements of the quick sort. - The left boundary of the quick sort. - The right boundary of the quick sort. - - - - Recursive implementation for an in place quick sort on a list while reordering one other list accordingly. - - The type of the list on which the quick sort is performed. - The type of the list which is automatically reordered accordingly. - The list which is sorted using quick sort. - The list which is automatically reordered accordingly. - The method with which to compare two elements of the quick sort. - The left boundary of the quick sort. - The right boundary of the quick sort. - - - - Recursive implementation for an in place quick sort on one list while reordering two other lists accordingly. - - The type of the list on which the quick sort is performed. - The type of the first list which is automatically reordered accordingly. - The type of the second list which is automatically reordered accordingly. - The list which is sorted using quick sort. - The first list which is automatically reordered accordingly. - The second list which is automatically reordered accordingly. - The method with which to compare two elements of the quick sort. - The left boundary of the quick sort. - The right boundary of the quick sort. - - - - Recursive implementation for an in place quick sort on the primary and then by the secondary list while reordering one secondary list accordingly. - - The type of the primary list. - The type of the secondary list. - The list which is sorted using quick sort. - The list which is sorted secondarily (on primary duplicates) and automatically reordered accordingly. - The method with which to compare two elements of the primary list. - The method with which to compare two elements of the secondary list. - The left boundary of the quick sort. - The right boundary of the quick sort. 
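The sorting entries describe an in-place quicksort that optionally permutes one or two companion lists in lock-step with the key list. The sketch below shows the one-companion variant under that description; the class and method names are illustrative, not the documented overloads.

```csharp
using System;
using System.Collections.Generic;

static class SortSketch
{
    // Recursive in-place quicksort of 'keys' that applies the identical
    // permutation to 'items', so the two lists stay aligned.
    public static void Sort<TKey, TItem>(IList<TKey> keys, IList<TItem> items,
                                         IComparer<TKey> comparer, int left, int right)
    {
        if (left >= right) return;

        TKey pivot = keys[(left + right) / 2];
        int i = left, j = right;
        while (i <= j)
        {
            while (comparer.Compare(keys[i], pivot) < 0) i++;
            while (comparer.Compare(keys[j], pivot) > 0) j--;
            if (i <= j)
            {
                (keys[i], keys[j]) = (keys[j], keys[i]);      // swap keys
                (items[i], items[j]) = (items[j], items[i]);  // keep items aligned
                i++; j--;
            }
        }
        Sort(keys, items, comparer, left, j);
        Sort(keys, items, comparer, i, right);
    }

    static void Main()
    {
        var keys = new List<int> { 3, 1, 2 };
        var items = new List<string> { "c", "a", "b" };
        Sort(keys, items, Comparer<int>.Default, 0, keys.Count - 1);
        Console.WriteLine(string.Join(",", keys) + " / " + string.Join(",", items));
        // prints: 1,2,3 / a,b,c
    }
}
```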
- - - - Performs an in place swap of two elements in a list. - - The type of elements stored in the list. - The list in which the elements are stored. - The index of the first element of the swap. - The index of the second element of the swap. - - - - This partial implementation of the SpecialFunctions class contains all methods related to the Airy functions. - - - This partial implementation of the SpecialFunctions class contains all methods related to the Bessel functions. - - - This partial implementation of the SpecialFunctions class contains all methods related to the error function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the Hankel function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the harmonic function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the logistic function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. - - - This partial implementation of the SpecialFunctions class contains all methods related to the spherical Bessel functions. - - - - - Returns the Airy function Ai. - AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. - - The value to compute the Airy function of. - The Airy function Ai. - - - - Returns the exponentially scaled Airy function Ai. - ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). - - The value to compute the Airy function of. - The exponentially scaled Airy function Ai. - - - - Returns the Airy function Ai. - AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. - - The value to compute the Airy function of. - The Airy function Ai. - - - - Returns the exponentially scaled Airy function Ai. - ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). - - The value to compute the Airy function of. - The exponentially scaled Airy function Ai. - - - - Returns the derivative of the Airy function Ai. - AiryAiPrime(z) is defined as d/dz AiryAi(z). - - The value to compute the derivative of the Airy function of. - The derivative of the Airy function Ai. - - - - Returns the exponentially scaled derivative of Airy function Ai - ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). - - The value to compute the derivative of the Airy function of. - The exponentially scaled derivative of Airy function Ai. - - - - Returns the derivative of the Airy function Ai. - AiryAiPrime(z) is defined as d/dz AiryAi(z). - - The value to compute the derivative of the Airy function of. - The derivative of the Airy function Ai. - - - - Returns the exponentially scaled derivative of the Airy function Ai. - ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). - - The value to compute the derivative of the Airy function of. - The exponentially scaled derivative of the Airy function Ai. - - - - Returns the Airy function Bi. - AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. - - The value to compute the Airy function of. - The Airy function Bi. - - - - Returns the exponentially scaled Airy function Bi. 
- ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). - - The value to compute the Airy function of. - The exponentially scaled Airy function Bi(z). - - - - Returns the Airy function Bi. - AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. - - The value to compute the Airy function of. - The Airy function Bi. - - - - Returns the exponentially scaled Airy function Bi. - ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). - - The value to compute the Airy function of. - The exponentially scaled Airy function Bi. - - - - Returns the derivative of the Airy function Bi. - AiryBiPrime(z) is defined as d/dz AiryBi(z). - - The value to compute the derivative of the Airy function of. - The derivative of the Airy function Bi. - - - - Returns the exponentially scaled derivative of the Airy function Bi. - ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). - - The value to compute the derivative of the Airy function of. - The exponentially scaled derivative of the Airy function Bi. - - - - Returns the derivative of the Airy function Bi. - AiryBiPrime(z) is defined as d/dz AiryBi(z). - - The value to compute the derivative of the Airy function of. - The derivative of the Airy function Bi. - - - - Returns the exponentially scaled derivative of the Airy function Bi. - ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). - - The value to compute the derivative of the Airy function of. - The exponentially scaled derivative of the Airy function Bi. - - - - Returns the Bessel function of the first kind. - BesselJ(n, z) is a solution to the Bessel differential equation. - - The order of the Bessel function. - The value to compute the Bessel function of. - The Bessel function of the first kind. - - - - Returns the exponentially scaled Bessel function of the first kind. - ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). - - The order of the Bessel function. - The value to compute the Bessel function of. - The exponentially scaled Bessel function of the first kind. - - - - Returns the Bessel function of the first kind. - BesselJ(n, z) is a solution to the Bessel differential equation. - - The order of the Bessel function. - The value to compute the Bessel function of. - The Bessel function of the first kind. - - - - Returns the exponentially scaled Bessel function of the first kind. - ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). - - The order of the Bessel function. - The value to compute the Bessel function of. - The exponentially scaled Bessel function of the first kind. - - - - Returns the Bessel function of the second kind. - BesselY(n, z) is a solution to the Bessel differential equation. - - The order of the Bessel function. - The value to compute the Bessel function of. - The Bessel function of the second kind. - - - - Returns the exponentially scaled Bessel function of the second kind. - ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * Y(n, z). - - The order of the Bessel function. - The value to compute the Bessel function of. - The exponentially scaled Bessel function of the second kind. - - - - Returns the Bessel function of the second kind. - BesselY(n, z) is a solution to the Bessel differential equation. - - The order of the Bessel function. - The value to compute the Bessel function of. - The Bessel function of the second kind. 
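To make "Bessel function of the first kind" concrete, the defining small-argument power series of J0 is easy to write down. This is only a textbook illustration of what BesselJ(0, x) means, not the partitioned Chebyshev/asymptotic scheme a library routine would use; the names below are illustrative.

```csharp
using System;

static class BesselSeriesSketch
{
    // Textbook power series for the Bessel function of the first kind, order 0:
    // J0(x) = sum_{k>=0} (-1)^k (x/2)^(2k) / (k!)^2  -- accurate for small |x| only.
    public static double J0(double x)
    {
        double term = 1.0, sum = 1.0;
        double q = 0.25 * x * x;                  // (x/2)^2
        for (int k = 1; k <= 30; k++)
        {
            term *= -q / (k * (double)k);         // next series term
            sum += term;
            if (Math.Abs(term) < 1e-16 * Math.Abs(sum)) break;
        }
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(J0(1.0));                // about 0.76519768655797
        Console.WriteLine(J0(2.404825557695773));  // about 0 (first zero of J0)
    }
}
```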
- - - - Returns the exponentially scaled Bessel function of the second kind. - ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselY(n, z). - - The order of the Bessel function. - The value to compute the Bessel function of. - The exponentially scaled Bessel function of the second kind. - - - - Returns the modified Bessel function of the first kind. - BesselI(n, z) is a solution to the modified Bessel differential equation. - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The modified Bessel function of the first kind. - - - - Returns the exponentially scaled modified Bessel function of the first kind. - ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The exponentially scaled modified Bessel function of the first kind. - - - - Returns the modified Bessel function of the first kind. - BesselI(n, z) is a solution to the modified Bessel differential equation. - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The modified Bessel function of the first kind. - - - - Returns the exponentially scaled modified Bessel function of the first kind. - ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The exponentially scaled modified Bessel function of the first kind. - - - - Returns the modified Bessel function of the second kind. - BesselK(n, z) is a solution to the modified Bessel differential equation. - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The modified Bessel function of the second kind. - - - - Returns the exponentially scaled modified Bessel function of the second kind. - ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The exponentially scaled modified Bessel function of the second kind. - - - - Returns the modified Bessel function of the second kind. - BesselK(n, z) is a solution to the modified Bessel differential equation. - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The modified Bessel function of the second kind. - - - - Returns the exponentially scaled modified Bessel function of the second kind. - ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). - - The order of the modified Bessel function. - The value to compute the modified Bessel function of. - The exponentially scaled modified Bessel function of the second kind. - - - - Computes the logarithm of the Euler Beta function. - - The first Beta parameter, a positive real number. - The second Beta parameter, a positive real number. - The logarithm of the Euler Beta function evaluated at z,w. - If or are not positive. - - - - Computes the Euler Beta function. - - The first Beta parameter, a positive real number. - The second Beta parameter, a positive real number. - The Euler Beta function evaluated at z,w. - If or are not positive. - - - - Returns the lower incomplete (unregularized) beta function - B(a,b,x) = int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. - - The first Beta parameter, a positive real number. - The second Beta parameter, a positive real number. - The upper limit of the integral. 
- The lower incomplete (unregularized) beta function. - - - - Returns the regularized lower incomplete beta function - I_x(a,b) = 1/Beta(a,b) * int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. - - The first Beta parameter, a positive real number. - The second Beta parameter, a positive real number. - The upper limit of the integral. - The regularized lower incomplete beta function. - - - - ************************************** - COEFFICIENTS FOR METHOD ErfImp * - ************************************** - - Polynomial coefficients for a numerator of ErfImp - calculation for Erf(x) in the interval [1e-10, 0.5]. - - - - Polynomial coefficients for a denominator of ErfImp - calculation for Erf(x) in the interval [1e-10, 0.5]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [0.5, 0.75]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [0.5, 0.75]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [0.75, 1.25]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [0.75, 1.25]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [1.25, 2.25]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [1.25, 2.25]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [2.25, 3.5]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [2.25, 3.5]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [3.5, 5.25]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [3.5, 5.25]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [5.25, 8]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [5.25, 8]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [8, 11.5]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [8, 11.5]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [11.5, 17]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [11.5, 17]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [17, 24]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [17, 24]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [24, 38]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [24, 38]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [38, 60]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [38, 60]. - - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [60, 85]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [60, 85]. 
- - - - Polynomial coefficients for a numerator in ErfImp - calculation for Erfc(x) in the interval [85, 110]. - - - - Polynomial coefficients for a denominator in ErfImp - calculation for Erfc(x) in the interval [85, 110]. - - - - - ************************************** - COEFFICIENTS FOR METHOD ErfInvImp * - ************************************** - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0, 0.5]. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0, 0.5]. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.5, 0.75]. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.5, 0.75]. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. - - - - Polynomial coefficients for a numerator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. - - - - Polynomial coefficients for a denominator of ErfInvImp - calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. - - - - Calculates the error function. - The value to evaluate. - the error function evaluated at given value. - - - returns 1 if x == double.PositiveInfinity. - returns -1 if x == double.NegativeInfinity. - - - - - Calculates the complementary error function. - The value to evaluate. - the complementary error function evaluated at given value. - - - returns 0 if x == double.PositiveInfinity. - returns 2 if x == double.NegativeInfinity. - - - - - Calculates the inverse error function evaluated at z. - The inverse error function evaluated at given value. - - - returns double.PositiveInfinity if z >= 1.0. - returns double.NegativeInfinity if z <= -1.0. - - - Calculates the inverse error function evaluated at z. - value to evaluate. - the inverse error function evaluated at Z. - - - - Implementation of the error function. - - Where to evaluate the error function. - Whether to compute 1 - the error function. - the error function. - - - Calculates the complementary inverse error function evaluated at z. - The complementary inverse error function evaluated at given value. - We have tested this implementation against the arbitrary precision mpmath library - and found cases where we can only guarantee 9 significant figures correct. - - returns double.PositiveInfinity if z <= 0.0. 
- returns double.NegativeInfinity if z >= 2.0. - - - calculates the complementary inverse error function evaluated at z. - value to evaluate. - the complementary inverse error function evaluated at Z. - - - - The implementation of the inverse error function. - - First intermediate parameter. - Second intermediate parameter. - Third intermediate parameter. - the inverse error function. - - - - Computes the generalized Exponential Integral function (En). - - The argument of the Exponential Integral function. - Integer power of the denominator term. Generalization index. - The value of the Exponential Integral function. - - This implementation of the computation of the Exponential Integral function follows the derivation in - "Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55", Abramowitz, M., and Stegun, I.A. 1964, reprinted 1968 by - Dover Publications, New York), Chapters 6, 7, and 26. - AND - "Advanced mathematical methods for scientists and engineers", Bender, Carl M.; Steven A. Orszag (1978). page 253 - - - for x > 1 uses continued fraction approach that is often used to compute incomplete gamma. - for 0 < x <= 1 uses Taylor series expansion - - Our unit tests suggest that the accuracy of the Exponential Integral function is correct up to 13 floating point digits. - - - - - Computes the factorial function x -> x! of an integer number > 0. The function can represent all number up - to 22! exactly, all numbers up to 170! using a double representation. All larger values will overflow. - - A value value! for value > 0 - - If you need to multiply or divide various such factorials, consider using the logarithmic version - instead so you can add instead of multiply and subtract instead of divide, and - then exponentiate the result using . This will also circumvent the problem that - factorials become very large even for small parameters. - - - - - - Computes the factorial of an integer. - - - - - Computes the logarithmic factorial function x -> ln(x!) of an integer number > 0. - - A value value! for value > 0 - - - - Computes the binomial coefficient: n choose k. - - A nonnegative value n. - A nonnegative value h. - The binomial coefficient: n choose k. - - - - Computes the natural logarithm of the binomial coefficient: ln(n choose k). - - A nonnegative value n. - A nonnegative value h. - The logarithmic binomial coefficient: ln(n choose k). - - - - Computes the multinomial coefficient: n choose n1, n2, n3, ... - - A nonnegative value n. - An array of nonnegative values that sum to . - The multinomial coefficient. - if is . - If or any of the are negative. - If the sum of all is not equal to . - - - - The order of the approximation. - - - - - Auxiliary variable when evaluating the function. - - - - - Polynomial coefficients for the approximation. - - - - - Computes the logarithm of the Gamma function. - - The argument of the gamma function. - The logarithm of the gamma function. - - This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in - "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. - We use the implementation listed on p. 116 which achieves an accuracy of 16 floating point digits. Although 16 digit accuracy - should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). - Our unit tests suggest that the accuracy of the Gamma function is correct up to 14 floating point digits. - - - - - Computes the Gamma function. 
- - The argument of the gamma function. - The logarithm of the gamma function. - - - This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in - "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. - We use the implementation listed on p. 116 which should achieve an accuracy of 16 floating point digits. Although 16 digit accuracy - should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). - - Our unit tests suggest that the accuracy of the Gamma function is correct up to 13 floating point digits. - - - - - Returns the upper incomplete regularized gamma function - Q(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. - - The argument for the gamma function. - The lower integral limit. - The upper incomplete regularized gamma function. - - - - Returns the upper incomplete gamma function - Gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. - - The argument for the gamma function. - The lower integral limit. - The upper incomplete gamma function. - - - - Returns the lower incomplete gamma function - gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. - - The argument for the gamma function. - The upper integral limit. - The lower incomplete gamma function. - - - - Returns the lower incomplete regularized gamma function - P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. - - The argument for the gamma function. - The upper integral limit. - The lower incomplete gamma function. - - - - Returns the inverse P^(-1) of the regularized lower incomplete gamma function - P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0, - such that P^(-1)(a,P(a,x)) == x. - - - - - Computes the Digamma function which is mathematically defined as the derivative of the logarithm of the gamma function. - This implementation is based on - Jose Bernardo - Algorithm AS 103: - Psi ( Digamma ) Function, - Applied Statistics, - Volume 25, Number 3, 1976, pages 315-317. - Using the modifications as in Tom Minka's lightspeed toolbox. - - The argument of the digamma function. - The value of the DiGamma function at . - - - - Computes the inverse Digamma function: this is the inverse of the logarithm of the gamma function. This function will - only return solutions that are positive. - This implementation is based on the bisection method. - - The argument of the inverse digamma function. - The positive solution to the inverse DiGamma function at . - - - - Computes the Rising Factorial (Pochhammer function) x -> (x)n, n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials - - The real value of the Rising Factorial for x and n - - - - Computes the Falling Factorial (Pochhammer function) x -> x(n), n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials - - The real value of the Falling Factorial for x and n - - - - A generalized hypergeometric series is a power series in which the ratio of successive coefficients indexed by n is a rational function of n. - This is the most common pFq(a1, ..., ap; b1,...,bq; z) representation - see: https://en.wikipedia.org/wiki/Generalized_hypergeometric_function - - The list of coefficients in the numerator - The list of coefficients in the denominator - The variable in the power series - The value of the Generalized HyperGeometric Function. - - - - Returns the Hankel function of the first kind. - HankelH1(n, z) is defined as BesselJ(n, z) + j * BesselY(n, z). 
- - The order of the Hankel function. - The value to compute the Hankel function of. - The Hankel function of the first kind. - - - - Returns the exponentially scaled Hankel function of the first kind. - ScaledHankelH1(n, z) is given by Exp(-z * j) * HankelH1(n, z) where j = Sqrt(-1). - - The order of the Hankel function. - The value to compute the Hankel function of. - The exponentially scaled Hankel function of the first kind. - - - - Returns the Hankel function of the second kind. - HankelH2(n, z) is defined as BesselJ(n, z) - j * BesselY(n, z). - - The order of the Hankel function. - The value to compute the Hankel function of. - The Hankel function of the second kind. - - - - Returns the exponentially scaled Hankel function of the second kind. - ScaledHankelH2(n, z) is given by Exp(z * j) * HankelH2(n, z) where j = Sqrt(-1). - - The order of the Hankel function. - The value to compute the Hankel function of. - The exponentially scaled Hankel function of the second kind. - - - - Computes the 'th Harmonic number. - - The Harmonic number which needs to be computed. - The t'th Harmonic number. - - - - Compute the generalized harmonic number of order n of m. (1 + 1/2^m + 1/3^m + ... + 1/n^m) - - The order parameter. - The power parameter. - General Harmonic number. - - - - Returns the Kelvin function of the first kind. - KelvinBe(nu, x) is given by BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). - KelvinBer(nu, x) and KelvinBei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) - - the order of the the Kelvin function. - The value to compute the Kelvin function of. - The Kelvin function of the first kind. - - - - Returns the Kelvin function ber. - KelvinBer(nu, x) is given by the real part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). - - the order of the the Kelvin function. - The value to compute the Kelvin function of. - The Kelvin function ber. - - - - Returns the Kelvin function ber. - KelvinBer(x) is given by the real part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). - KelvinBer(x) is equivalent to KelvinBer(0, x). - - The value to compute the Kelvin function of. - The Kelvin function ber. - - - - Returns the Kelvin function bei. - KelvinBei(nu, x) is given by the imaginary part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). - - the order of the the Kelvin function. - The value to compute the Kelvin function of. - The Kelvin function bei. - - - - Returns the Kelvin function bei. - KelvinBei(x) is given by the imaginary part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). - KelvinBei(x) is equivalent to KelvinBei(0, x). - - The value to compute the Kelvin function of. - The Kelvin function bei. - - - - Returns the derivative of the Kelvin function ber. - - The order of the Kelvin function. - The value to compute the derivative of the Kelvin function of. - the derivative of the Kelvin function ber - - - - Returns the derivative of the Kelvin function ber. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function ber. - - - - Returns the derivative of the Kelvin function bei. - - The order of the Kelvin function. - The value to compute the derivative of the Kelvin function of. - the derivative of the Kelvin function bei. - - - - Returns the derivative of the Kelvin function bei. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function bei. 
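The harmonic-number entries above are plain finite sums, so a direct-summation sketch shows exactly what is being computed. A production implementation might switch to the digamma function for large arguments; the names below are illustrative, not the documented methods.

```csharp
using System;

static class HarmonicSketch
{
    // H(t)    = 1 + 1/2 + ... + 1/t          (the t'th harmonic number)
    // H(n, m) = 1 + 1/2^m + ... + 1/n^m      (generalized harmonic number)
    public static double Harmonic(int t)
    {
        double sum = 0.0;
        for (int k = 1; k <= t; k++) sum += 1.0 / k;
        return sum;
    }

    public static double GeneralHarmonic(int n, double m)
    {
        double sum = 0.0;
        for (int k = 1; k <= n; k++) sum += Math.Pow(k, -m);
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(Harmonic(4));            // 2.0833333333333335
        Console.WriteLine(GeneralHarmonic(4, 2));  // 1 + 1/4 + 1/9 + 1/16 = 1.4236...
    }
}
```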
- - - - Returns the Kelvin function of the second kind - KelvinKe(nu, x) is given by Exp(-nu * pi * j / 2) * BesselK(nu, x * sqrt(j)) where j = sqrt(-1). - KelvinKer(nu, x) and KelvinKei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) - - The order of the Kelvin function. - The value to calculate the kelvin function of, - - - - - Returns the Kelvin function ker. - KelvinKer(nu, x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). - - the order of the the Kelvin function. - The non-negative real value to compute the Kelvin function of. - The Kelvin function ker. - - - - Returns the Kelvin function ker. - KelvinKer(x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). - KelvinKer(x) is equivalent to KelvinKer(0, x). - - The non-negative real value to compute the Kelvin function of. - The Kelvin function ker. - - - - Returns the Kelvin function kei. - KelvinKei(nu, x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). - - the order of the the Kelvin function. - The non-negative real value to compute the Kelvin function of. - The Kelvin function kei. - - - - Returns the Kelvin function kei. - KelvinKei(x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). - KelvinKei(x) is equivalent to KelvinKei(0, x). - - The non-negative real value to compute the Kelvin function of. - The Kelvin function kei. - - - - Returns the derivative of the Kelvin function ker. - - The order of the Kelvin function. - The non-negative real value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function ker. - - - - Returns the derivative of the Kelvin function ker. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function ker. - - - - Returns the derivative of the Kelvin function kei. - - The order of the Kelvin function. - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function kei. - - - - Returns the derivative of the Kelvin function kei. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function kei. - - - - Computes the logistic function. see: http://en.wikipedia.org/wiki/Logistic - - The parameter for which to compute the logistic function. - The logistic function of . - - - - Computes the logit function, the inverse of the sigmoid logistic function. see: http://en.wikipedia.org/wiki/Logit - - The parameter for which to compute the logit function. This number should be - between 0 and 1. - The logarithm of divided by 1.0 - . - - - - ************************************** - COEFFICIENTS FOR METHODS bessi0 * - ************************************** - - Chebyshev coefficients for exp(-x) I0(x) - in the interval [0, 8]. - - lim(x->0){ exp(-x) I0(x) } = 1. - - - - Chebyshev coefficients for exp(-x) sqrt(x) I0(x) - in the inverted interval [8, infinity]. - - lim(x->inf){ exp(-x) sqrt(x) I0(x) } = 1/sqrt(2pi). - - - - - ************************************** - COEFFICIENTS FOR METHODS bessi1 * - ************************************** - - Chebyshev coefficients for exp(-x) I1(x) / x - in the interval [0, 8]. - - lim(x->0){ exp(-x) I1(x) / x } = 1/2. - - - - Chebyshev coefficients for exp(-x) sqrt(x) I1(x) - in the inverted interval [8, infinity]. - - lim(x->inf){ exp(-x) sqrt(x) I1(x) } = 1/sqrt(2pi). 
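The logistic and logit entries documented a little earlier in this block are each a one-liner; the sketch below shows the pair and that they are inverses of each other. The class and method names are illustrative, not the documented API.

```csharp
using System;

static class LogisticSketch
{
    // Logistic (sigmoid) function and its inverse, the logit.
    public static double Logistic(double x) => 1.0 / (1.0 + Math.Exp(-x));

    // p must lie strictly between 0 and 1 for a finite result.
    public static double Logit(double p) => Math.Log(p / (1.0 - p));

    static void Main()
    {
        Console.WriteLine(Logistic(Logit(0.75)));  // round-trips back to 0.75
        Console.WriteLine(Logit(0.5));             // 0: the sigmoid's midpoint
    }
}
```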
- - - - - ************************************** - COEFFICIENTS FOR METHODS bessk0, bessk0e * - ************************************** - - Chebyshev coefficients for K0(x) + log(x/2) I0(x) - in the interval [0, 2]. The odd order coefficients are all - zero; only the even order coefficients are listed. - - lim(x->0){ K0(x) + log(x/2) I0(x) } = -EUL. - - - - Chebyshev coefficients for exp(x) sqrt(x) K0(x) - in the inverted interval [2, infinity]. - - lim(x->inf){ exp(x) sqrt(x) K0(x) } = sqrt(pi/2). - - - - - ************************************** - COEFFICIENTS FOR METHODS bessk1, bessk1e * - ************************************** - - Chebyshev coefficients for x(K1(x) - log(x/2) I1(x)) - in the interval [0, 2]. - - lim(x->0){ x(K1(x) - log(x/2) I1(x)) } = 1. - - - - Chebyshev coefficients for exp(x) sqrt(x) K1(x) - in the interval [2, infinity]. - - lim(x->inf){ exp(x) sqrt(x) K1(x) } = sqrt(pi/2). - - - - Returns the modified Bessel function of first kind, order 0 of the argument. -

- The function is defined as i0(x) = j0( ix ).
- The range is partitioned into the two intervals [0, 8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval.
- The value to compute the Bessel function of.
- Returns the modified Bessel function of first kind, order 1 of the argument.
- The function is defined as i1(x) = -i j1( ix ).
- The range is partitioned into the two intervals [0, 8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval.
- The value to compute the Bessel function of.
- Returns the modified Bessel function of the second kind of order 0 of the argument.
- The range is partitioned into the two intervals [0, 2] and (2, infinity). Chebyshev polynomial expansions are employed in each interval.
- The value to compute the Bessel function of.
- Returns the exponentially scaled modified Bessel function of the second kind of order 0 of the argument.
- The value to compute the Bessel function of.
- Returns the modified Bessel function of the second kind of order 1 of the argument.
- The range is partitioned into the two intervals [0, 2] and (2, infinity). Chebyshev polynomial expansions are employed in each interval.
- The value to compute the Bessel function of.
- Returns the exponentially scaled modified Bessel function of the second kind of order 1 of the argument.
- k1e(x) = exp(x) * k1(x).
- The value to compute the Bessel function of.
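As with J0 further above, the modified Bessel function i0 has a simple defining series for small arguments, which also makes the quoted identity i0(x) = j0(ix) visible: it is the J0 series with the alternating signs removed. This is only an illustrative sketch, not the two-interval Chebyshev scheme described above, and the names are hypothetical.

```csharp
using System;

static class ModifiedBesselSketch
{
    // Small-argument series for the modified Bessel function of the first kind, order 0:
    // I0(x) = sum_{k>=0} (x^2/4)^k / (k!)^2.
    public static double I0(double x)
    {
        double term = 1.0, sum = 1.0;
        double q = 0.25 * x * x;                  // (x/2)^2
        for (int k = 1; k <= 40; k++)
        {
            term *= q / (k * (double)k);          // next (always positive) term
            sum += term;
            if (term < 1e-16 * sum) break;
        }
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(I0(1.0));  // about 1.2660658777520082
        Console.WriteLine(I0(0.0));  // exactly 1, consistent with lim(x->0) exp(-x) I0(x) = 1
    }
}
```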
- - - Returns the modified Struve function of order 0. - - The value to compute the function of. - - - - Returns the modified Struve function of order 1. - - The value to compute the function of. - - - - Returns the difference between the Bessel I0 and Struve L0 functions. - - The value to compute the function of. - - - - Returns the difference between the Bessel I1 and Struve L1 functions. - - The value to compute the function of. - - - - Returns the spherical Bessel function of the first kind. - SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the first kind. - - - - Returns the spherical Bessel function of the first kind. - SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the first kind. - - - - Returns the spherical Bessel function of the second kind. - SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the second kind. - - - - Returns the spherical Bessel function of the second kind. - SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). - - The order of the spherical Bessel function. - The value to compute the spherical Bessel function of. - The spherical Bessel function of the second kind. - - - - Numerically stable exponential minus one, i.e. x -> exp(x)-1 - - A number specifying a power. - Returns exp(power)-1. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) - - The length of side a of the triangle. - The length of side b of the triangle. - Returns sqrt(a2 + b2) without underflow/overflow. - - - - Evaluation functions, useful for function approximation. - - - - - Evaluate a polynomial at point x. - Coefficients are ordered by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - Coefficients are ordered by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. - The coefficients of the polynomial, coefficient for power k at index k. - - - - Evaluate a polynomial at point x. - Coefficients are ordered by power with power k at index k. - Example: coefficients [3,-1,2] represent y=2x^2-x+3. - - The location where to evaluate the polynomial at. 
- The coefficients of the polynomial, coefficient for power k at index k. - - - - Numerically stable series summation - - provides the summands sequentially - Sum - - - Evaluates the series of Chebyshev polynomials Ti at argument x/2. - The series is given by -
- y = sum( coef[i] * T_i(x/2), i = 0..N-1 )
- Coefficients are stored in reverse order, i.e. the zero-order term is last in the array. Note N is the number of coefficients, not the order.
- If coefficients are for the interval a to b, x must have been transformed to x -> 2(2x - b - a)/(b-a) before entering the routine. This maps x from (a, b) to (-1, 1), over which the Chebyshev polynomials are defined.
- If the coefficients are for the inverted interval, in which (a, b) is mapped to (1/b, 1/a), the transformation required is x -> 2(2ab/x - b - a)/(b-a). If b is infinity, this becomes x -> 4a/x - 1.
- SPEED: Taking advantage of the recurrence properties of the Chebyshev polynomials, the routine requires one more addition per loop than evaluating a nested polynomial of the same degree.
- The coefficients of the polynomial.
- Argument to the polynomial.
- Reference: https://bpm2.svn.codeplex.com/svn/Common.Numeric/Arithmetic.cs
- Marked as Deprecated in http://people.apache.org/~isabel/mahout_site/mahout-matrix/apidocs/org/apache/mahout/jet/math/Arithmetic.html
- Summation of Chebyshev polynomials, using the Clenshaw method with Reinsch modification.
- The no. of terms in the sequence.
- The coefficients of the Chebyshev series, length n+1.
- The value at which the series is to be evaluated.
- ORIGINAL AUTHOR: Dr. Allan J. MacLeod; Dept. of Mathematics and Statistics, University of Paisley; High St., PAISLEY, SCOTLAND
- REFERENCES: "An error analysis of the modified Clenshaw method for evaluating Chebyshev and Fourier series", J. Oliver, J.I.M.A., vol. 20, 1977, pp. 379-391.
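Both Chebyshev-series evaluators documented above reduce to a three-term recurrence. The sketch below shows the classic Cephes-style variant (coefficients in reverse order, argument already transformed into the fitted interval); note that with this recurrence the zero-order coefficient effectively enters half-weighted, as is conventional for such coefficient tables. The name `ChebyshevEval` is illustrative, not the library's.

```csharp
using System;

static class ChebyshevSketch
{
    // Evaluates a Chebyshev series with reverse-ordered coefficients
    // (zero-order term last) at the already-transformed argument x,
    // using the standard three-term recurrence.
    public static double ChebyshevEval(double x, double[] coef)
    {
        double b0 = coef[0], b1 = 0.0, b2 = 0.0;
        for (int i = 1; i < coef.Length; i++)
        {
            b2 = b1;
            b1 = b0;
            b0 = x * b1 - b2 + coef[i];   // Chebyshev recurrence
        }
        return 0.5 * (b0 - b2);
    }

    static void Main()
    {
        // With only the constant term present (stored last), the recurrence
        // returns coef/2 - the usual half-weighting of the zero-order coefficient.
        Console.WriteLine(ChebyshevEval(0.3, new[] { 2.0 }));  // prints 1
    }
}
```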
- - - Valley-shaped Rosenbrock function for 2 dimensions: (x,y) -> (1-x)^2 + 100*(y-x^2)^2. - This function has a global minimum at (1,1) with f(1,1) = 0. - Common range: [-5,10] or [-2.048,2.048]. - - - https://en.wikipedia.org/wiki/Rosenbrock_function - http://www.sfu.ca/~ssurjano/rosen.html - - - - - Valley-shaped Rosenbrock function for 2 or more dimensions. - This function have a global minimum of all ones and, for 8 > N > 3, a local minimum at (-1,1,...,1). - - - https://en.wikipedia.org/wiki/Rosenbrock_function - http://www.sfu.ca/~ssurjano/rosen.html - - - - - Himmelblau, a multi-modal function: (x,y) -> (x^2+y-11)^2 + (x+y^2-7)^2 - This function has 4 global minima with f(x,y) = 0. - Common range: [-6,6]. - Named after David Mautner Himmelblau - - - https://en.wikipedia.org/wiki/Himmelblau%27s_function - - - - - Rastrigin, a highly multi-modal function with many local minima. - Global minimum of all zeros with f(0) = 0. - Common range: [-5.12,5.12]. - - - https://en.wikipedia.org/wiki/Rastrigin_function - http://www.sfu.ca/~ssurjano/rastr.html - - - - - Drop-Wave, a multi-modal and highly complex function with many local minima. - Global minimum of all zeros with f(0) = -1. - Common range: [-5.12,5.12]. - - - http://www.sfu.ca/~ssurjano/drop.html - - - - - Ackley, a function with many local minima. It is nearly flat in outer regions but has a large hole at the center. - Global minimum of all zeros with f(0) = 0. - Common range: [-32.768, 32.768]. - - - http://www.sfu.ca/~ssurjano/ackley.html - - - - - Bowl-shaped first Bohachevsky function. - Global minimum of all zeros with f(0,0) = 0. - Common range: [-100, 100] - - - http://www.sfu.ca/~ssurjano/boha.html - - - - - Plate-shaped Matyas function. - Global minimum of all zeros with f(0,0) = 0. - Common range: [-10, 10]. - - - http://www.sfu.ca/~ssurjano/matya.html - - - - - Valley-shaped six-hump camel back function. - Two global minima and four local minima. Global minima with f(x) ) -1.0316 at (0.0898,-0.7126) and (-0.0898,0.7126). - Common range: x in [-3,3], y in [-2,2]. - - - http://www.sfu.ca/~ssurjano/camel6.html - - - - - Statistics operating on arrays assumed to be unsorted. - WARNING: Methods with the Inplace-suffix may modify the data array by reordering its entries. - - - - - - - - Returns the smallest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the smallest absolute value from the unsorted data array. - Returns NaN if data is empty or any entry is NaN. - - Sample array, no sorting is assumed. - - - - Returns the largest absolute value from the unsorted data array. 
Deleted documentation — unsorted-array statistics (documented identically for several numeric overloads):
* Arithmetic, geometric and harmonic means, and the root mean square (RMS, quadratic mean); NaN is returned if the data is empty or contains NaN.
* Unbiased sample variance and standard deviation with the N-1 normalizer (Bessel's correction; NaN for fewer than two entries) versus population variance and standard deviation with the N normalizer (biased if applied to a subset); combined mean/variance and mean/standard-deviation estimators; sample and population covariance of two arrays.
* Order statistics (order 1..N), median, p-percentile, first and third quartile, inter-quartile range, five-number summary {min, lower-quantile, median, upper-quantile, max}, tau-th quantile and ranks. The in-place variants can reorder the caller's array (documented with an explicit WARNING).
* The tau-th quantile is the data value where the cumulative distribution function crosses tau. Quantiles default to the R-8 / SciPy-(1/3,1/3) definition: linear interpolation of the approximate medians of the order statistics, with tau < (2/3)/(N+1/3) mapping to x1 and tau >= (N-1/3)/(N+1/3) mapping to xN. Custom definitions can be given either by the four Mathematica-compatible parameters a, b, c and d, or by a named quantile/rank definition for compatibility with an existing system. A sketch of the default R-8 rule follows below.
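A minimal sketch of the R-8 / SciPy-(1/3,1/3) quantile rule described above, written against a sorted copy so the caller's array is not reordered. This illustrates the documented definition and is not the library's implementation.

```cpp
// R-8 quantile sketch: h is a 1-based fractional rank; tau below (2/3)/(N+1/3)
// maps to x1 and tau at or above (N-1/3)/(N+1/3) maps to xN, as documented.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

static double quantile_r8(std::vector<double> data, double tau)
{
    if (data.empty() || tau < 0.0 || tau > 1.0) return std::nan("");
    std::sort(data.begin(), data.end());
    const double n = static_cast<double>(data.size());

    const double h = (n + 1.0 / 3.0) * tau + 1.0 / 3.0;
    if (h <= 1.0) return data.front();   // tau < (2/3)/(N+1/3)
    if (h >= n)   return data.back();    // tau >= (N-1/3)/(N+1/3)

    const std::size_t lo = static_cast<std::size_t>(std::floor(h)); // 1-based
    const double frac = h - std::floor(h);
    return data[lo - 1] + frac * (data[lo] - data[lo - 1]);         // linear interpolation
}

int main()
{
    const std::vector<double> x = {1.0, 2.0, 3.0, 4.0};
    std::printf("tau=0.5 quantile = %g\n", quantile_r8(x, 0.5)); // 2.5
    return 0;
}
```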
Deleted documentation — correlation measures and descriptive statistics:
* Auto-correlation function (ACF) computed via FFT, either for all possible lags, for a lag range between kMin and kMax, or for an explicit list of lags.
* Pearson product-moment correlation, weighted Pearson correlation, Spearman ranked correlation, and the corresponding correlation matrices over sets of sample vectors (see the sketch below).
* A descriptive-statistics class that meets the NIST accuracy standard for mean, variance and standard deviation (the only statistics NIST provides exact values for) and exceeds it in increased-accuracy mode; the documentation recommends RunningStatistics instead.
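For reference, a minimal sketch of the Pearson product-moment correlation coefficient mentioned above: the covariance of the two samples divided by the product of their standard deviations. Illustrative only, not the library's code.

```cpp
// Pearson correlation sketch: r = cov(a,b) / (sd(a) * sd(b)).
#include <cmath>
#include <cstdio>
#include <vector>

static double pearson(const std::vector<double>& a, const std::vector<double>& b)
{
    if (a.size() != b.size() || a.size() < 2) return std::nan("");
    const std::size_t n = a.size();

    double ma = 0.0, mb = 0.0;
    for (std::size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= static_cast<double>(n);
    mb /= static_cast<double>(n);

    double cov = 0.0, va = 0.0, vb = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double da = a[i] - ma, db = b[i] - mb;
        cov += da * db; va += da * da; vb += db * db;
    }
    return cov / std::sqrt(va * vb);
}

int main()
{
    std::printf("%g\n", pearson({1.0, 2.0, 3.0, 4.0}, {2.0, 4.0, 6.0, 8.0})); // 1
    return 0;
}
```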
Deleted documentation — descriptive statistics, histograms and kernel density estimation:
* Descriptive statistics: count, mean, unbiased sample variance and standard deviation (N-1 normalizer), skewness (zero for fewer than three samples), kurtosis (zero for fewer than four samples), minimum and maximum; computable from streams of plain or nullable values. Increased-accuracy mode is documented as unsuitable for data with very large absolute values because the internal calculations may overflow. The type declares a DataContract for ephemeral serialization (DataContractSerializer, Protocol Buffers, FsPickler) without any cross-version compatibility guarantee; relying on it for durable persistence is discouraged.
* Histogram and Bucket: a histogram consists of a series of buckets, each covering a region with an exclusive lower bound and an inclusive upper bound; buckets expose their width, data count, point-containment checks and comparisons between disjoint buckets. The histogram can be built with a fixed number of equally sized buckets, adapts its bounds when new data falls outside the current range, and supports bucket lookup by value or index.
* Kernel density estimation: estimates the probability density function of a random variable from samples, assuming a real, non-negative kernel that integrates to 1. Provided kernels: Gaussian (PDF of the standard normal, the default), Epanechnikov (3/4*(1-x^2) for |x| <= 1, optimal in the mean-square-error sense), uniform (1/2 for |x| <= 1) and triangular (1-|x| for |x| <= 1). A sketch of the estimator follows below.
* Also begins here: the multivariate hybrid Monte Carlo sampler (continued in the next part), with constructors that set the momentum standard deviations, the number and size of the "frog leap" simulation steps, the burn interval, the random source and the numerical-differentiation method.
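A minimal sketch of the kernel density estimator summarized above, using the documented Epanechnikov kernel. The estimator form (1/(n*h)) * sum K((x - x_i)/h) and the bandwidth value are standard assumptions on our part, not taken from the deleted file.

```cpp
// KDE sketch with the Epanechnikov kernel 3/4*(1 - u^2) on |u| <= 1.
#include <cmath>
#include <cstdio>
#include <vector>

static double epanechnikov(double u)
{
    return std::fabs(u) <= 1.0 ? 0.75 * (1.0 - u * u) : 0.0;
}

// Density estimate at x with bandwidth h: (1/(n*h)) * sum_i K((x - x_i)/h)
static double kde(const std::vector<double>& samples, double x, double h)
{
    double sum = 0.0;
    for (double xi : samples)
        sum += epanechnikov((x - xi) / h);
    return sum / (static_cast<double>(samples.size()) * h);
}

int main()
{
    const std::vector<double> samples = {-0.5, 0.0, 0.2, 0.7, 1.1};
    std::printf("estimated density at 0: %g\n", kde(samples, 0.0, 0.5));
    return 0;
}
```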
Deleted documentation — Markov chain Monte Carlo samplers:
* Hybrid (Hamiltonian) Monte Carlo: uses the negative log density as potential energy and a randomly sampled momentum to set up a Hamiltonian system, which can converge faster than random-walk Metropolis. The Hamiltonian is H = E + p.p/2, the dynamics are simulated with "frog leap" steps of configurable number and size, gradients default to a simple three-point estimate, the burn interval must be non-negative, and step count and step size must be positive.
* Delegates and helpers for global and local proposal samplers, densities and log densities, and transition-kernel log probabilities, plus a common sampler base class that tracks the random source, the number of proposals and accepted samples, and the acceptance rate.
* Metropolis sampling (symmetric proposal) and Metropolis-Hastings sampling (asymmetric proposal whose log density must be evaluable); both are stateful, work in log space and support a configurable burn interval (iterations between returned samples). A sketch of the random-walk Metropolis step follows below.
* Rejection sampling from a proposal Q with P(x) < Q(x) everywhere (densities need not be normalized; an exception is documented when the bound is violated), a univariate hybrid Monte Carlo sampler, and slice sampling after R. Neal, "Slice Sampling", 2003, with a positive scale factor.
* Convergence diagnostics: auto-correlation of a sample series under a user-supplied function and the resulting effective sample size.
* Also begins here: moving statistics over a window of data (continued in the next part).
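A minimal sketch of one random-walk Metropolis step as described above: propose a local move from a symmetric distribution and accept it with probability min(1, exp(logP(new) - logP(old))). The target density, proposal width and burn-in length are arbitrary illustrative choices.

```cpp
// Random-walk Metropolis sketch in log space with a symmetric uniform proposal.
#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    std::uniform_real_distribution<double> step(-1.0, 1.0);

    // Log density of the target distribution (unnormalized standard normal).
    auto log_p = [](double x) { return -0.5 * x * x; };

    double x = 0.0, sum = 0.0;
    const int burn = 1000, samples = 10000;
    for (int i = 0; i < burn + samples; ++i) {
        const double proposal = x + step(rng);          // symmetric local move
        const double log_ratio = log_p(proposal) - log_p(x);
        if (std::log(unit(rng)) < log_ratio)            // accept/reject
            x = proposal;
        if (i >= burn) sum += x;                        // keep post-burn-in samples
    }
    std::printf("sample mean ~ %g (target mean 0)\n", sum / samples);
    return 0;
}
```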
Deleted documentation — running statistics, sorted-array statistics and extension methods:
* Moving statistics over a window of data and a running-statistics accumulator updated by pushing single values or sequences, or by combining two accumulators: count, minimum, maximum, mean, sample and population variance and standard deviation, and (for the accumulator) sample and population skewness and kurtosis, with the same NaN and minimum-sample-count conventions as above. A sketch of such an accumulator follows below.
* Rank tie-breaking definitions: replace ties by their mean (default, non-integer ranks), by their minimum (typical sports ranking), by their maximum, or assign a permutation of increasing ranks across tied entries.
* Sorted-array statistics (ascending order assumed): minimum, maximum, order statistics, median, percentiles, quartiles, inter-quartile range, five-number summary, quantiles (R-8 by default, or custom a/b/c/d and named definitions), the empirical cumulative distribution function at a point, the quantile rank of a value, and ranks.
* Extension methods on data sequences: minimum/maximum (including nullable-ignoring, absolute-value and magnitude/phase variants), arithmetic, geometric and harmonic means, sample and population variance, standard deviation, skewness and kurtosis, with NaN returned for empty data, NaN entries or too few samples.
- Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than three entries or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the sample mean and the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the unbiased population skewness and kurtosis from the provided samples in a single pass. - Uses a normalizer (Bessel's correction; type 2). - - A subset of samples, sampled from the full population. - - - - Evaluates the skewness and kurtosis from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - - The full population data. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - The full population data. 
- - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - The full population data. - - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The full population data. - The full population data. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the RMS of. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the RMS of. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The data to calculate the mean of. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). 
- Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. 
- Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - - - - Evaluates the rank of each entry of the provided samples. - The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the provided samples. - The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the provided samples. 
- The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. 
- - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Calculates the entropy of a stream of double values in bits. - Returns NaN if any of the values in the stream are NaN. - - The data sample sequence. - - - - Calculates the entropy of a stream of double values in bits. - Returns NaN if any of the values in the stream are NaN. - Null-entries are ignored. - - The data sample sequence. - - - - Evaluates the sample mean over a moving window, for each samples. - Returns NaN if no data is empty or if any entry is NaN. - - The sample stream to calculate the mean of. - The number of last samples to consider. - - - - Statistics operating on an IEnumerable in a single pass, without keeping the full data in memory. - Can be used in a streaming way, e.g. on large datasets not fitting into memory. - - - - - - - - Returns the smallest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. 
- - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the geometric mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the geometric mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the harmonic mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the harmonic mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. 
- - - - Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample stream. - Second sample stream. - - - - Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample stream. - Second sample stream. - - - - Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population stream. - Second population stream. 
- - - - Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population stream. - Second population stream. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Calculates the entropy of a stream of double values. - Returns NaN if any of the values in the stream are NaN. - - The input stream to evaluate. - - - - - Used to simplify parallel code, particularly between the .NET 4.0 and Silverlight Code. - - - - - Executes a for loop in which iterations may run in parallel. - - The start index, inclusive. - The end index, exclusive. - The body to be invoked for each iteration range. - - - - Executes a for loop in which iterations may run in parallel. - - The start index, inclusive. - The end index, exclusive. - The partition size for splitting work into smaller pieces. - The body to be invoked for each iteration range. - - - - Executes each of the provided actions inside a discrete, asynchronous task. - - An array of actions to execute. - The actions array contains a null element. - At least one invocation of the actions threw an exception. - - - - Selects an item (such as Max or Min). - - Starting index of the loop. - Ending index of the loop - The function to select items over a subset. - The function to select the item of selection from the subsets. - The selected value. - - - - Selects an item (such as Max or Min). - - The array to iterate over. - The function to select items over a subset. - The function to select the item of selection from the subsets. - The selected value. - - - - Selects an item (such as Max or Min). - - Starting index of the loop. - Ending index of the loop - The function to select items over a subset. - The function to select the item of selection from the subsets. - Default result of the reduce function on an empty set. - The selected value. - - - - Selects an item (such as Max or Min). - - The array to iterate over. - The function to select items over a subset. - The function to select the item of selection from the subsets. - Default result of the reduce function on an empty set. - The selected value. - - - - Double-precision trigonometry toolkit. - - - - - Constant to convert a degree to grad. - - - - - Converts a degree (360-periodic) angle to a grad (400-periodic) angle. - - The degree to convert. - The converted grad angle. - - - - Converts a degree (360-periodic) angle to a radian (2*Pi-periodic) angle. - - The degree to convert. - The converted radian angle. - - - - Converts a grad (400-periodic) angle to a degree (360-periodic) angle. - - The grad to convert. - The converted degree. - - - - Converts a grad (400-periodic) angle to a radian (2*Pi-periodic) angle. - - The grad to convert. - The converted radian. - - - - Converts a radian (2*Pi-periodic) angle to a degree (360-periodic) angle. - - The radian to convert. - The converted degree. 
- - - - Converts a radian (2*Pi-periodic) angle to a grad (400-periodic) angle. - - The radian to convert. - The converted grad. - - - - Normalized Sinc function. sinc(x) = sin(pi*x)/(pi*x). - - - - - Trigonometric Sine of an angle in radian, or opposite / hypotenuse. - - The angle in radian. - The sine of the radian angle. - - - - Trigonometric Sine of a Complex number. - - The complex value. - The sine of the complex number. - - - - Trigonometric Cosine of an angle in radian, or adjacent / hypotenuse. - - The angle in radian. - The cosine of an angle in radian. - - - - Trigonometric Cosine of a Complex number. - - The complex value. - The cosine of a complex number. - - - - Trigonometric Tangent of an angle in radian, or opposite / adjacent. - - The angle in radian. - The tangent of the radian angle. - - - - Trigonometric Tangent of a Complex number. - - The complex value. - The tangent of the complex number. - - - - Trigonometric Cotangent of an angle in radian, or adjacent / opposite. Reciprocal of the tangent. - - The angle in radian. - The cotangent of an angle in radian. - - - - Trigonometric Cotangent of a Complex number. - - The complex value. - The cotangent of the complex number. - - - - Trigonometric Secant of an angle in radian, or hypotenuse / adjacent. Reciprocal of the cosine. - - The angle in radian. - The secant of the radian angle. - - - - Trigonometric Secant of a Complex number. - - The complex value. - The secant of the complex number. - - - - Trigonometric Cosecant of an angle in radian, or hypotenuse / opposite. Reciprocal of the sine. - - The angle in radian. - Cosecant of an angle in radian. - - - - Trigonometric Cosecant of a Complex number. - - The complex value. - The cosecant of a complex number. - - - - Trigonometric principal Arc Sine in radian - - The opposite for a unit hypotenuse (i.e. opposite / hypotenuse). - The angle in radian. - - - - Trigonometric principal Arc Sine of this Complex number. - - The complex value. - The arc sine of a complex number. - - - - Trigonometric principal Arc Cosine in radian - - The adjacent for a unit hypotenuse (i.e. adjacent / hypotenuse). - The angle in radian. - - - - Trigonometric principal Arc Cosine of this Complex number. - - The complex value. - The arc cosine of a complex number. - - - - Trigonometric principal Arc Tangent in radian - - The opposite for a unit adjacent (i.e. opposite / adjacent). - The angle in radian. - - - - Trigonometric principal Arc Tangent of this Complex number. - - The complex value. - The arc tangent of a complex number. - - - - Trigonometric principal Arc Cotangent in radian - - The adjacent for a unit opposite (i.e. adjacent / opposite). - The angle in radian. - - - - Trigonometric principal Arc Cotangent of this Complex number. - - The complex value. - The arc cotangent of a complex number. - - - - Trigonometric principal Arc Secant in radian - - The hypotenuse for a unit adjacent (i.e. hypotenuse / adjacent). - The angle in radian. - - - - Trigonometric principal Arc Secant of this Complex number. - - The complex value. - The arc secant of a complex number. - - - - Trigonometric principal Arc Cosecant in radian - - The hypotenuse for a unit opposite (i.e. hypotenuse / opposite). - The angle in radian. - - - - Trigonometric principal Arc Cosecant of this Complex number. - - The complex value. - The arc cosecant of a complex number. - - - - Hyperbolic Sine - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic sine of the angle. 
- - - - Hyperbolic Sine of a Complex number. - - The complex value. - The hyperbolic sine of a complex number. - - - - Hyperbolic Cosine - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic Cosine of the angle. - - - - Hyperbolic Cosine of a Complex number. - - The complex value. - The hyperbolic cosine of a complex number. - - - - Hyperbolic Tangent in radian - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic tangent of the angle. - - - - Hyperbolic Tangent of a Complex number. - - The complex value. - The hyperbolic tangent of a complex number. - - - - Hyperbolic Cotangent - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic cotangent of the angle. - - - - Hyperbolic Cotangent of a Complex number. - - The complex value. - The hyperbolic cotangent of a complex number. - - - - Hyperbolic Secant - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic secant of the angle. - - - - Hyperbolic Secant of a Complex number. - - The complex value. - The hyperbolic secant of a complex number. - - - - Hyperbolic Cosecant - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic cosecant of the angle. - - - - Hyperbolic Cosecant of a Complex number. - - The complex value. - The hyperbolic cosecant of a complex number. - - - - Hyperbolic Area Sine - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Sine of this Complex number. - - The complex value. - The hyperbolic arc sine of a complex number. - - - - Hyperbolic Area Cosine - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cosine of this Complex number. - - The complex value. - The hyperbolic arc cosine of a complex number. - - - - Hyperbolic Area Tangent - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Tangent of this Complex number. - - The complex value. - The hyperbolic arc tangent of a complex number. - - - - Hyperbolic Area Cotangent - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cotangent of this Complex number. - - The complex value. - The hyperbolic arc cotangent of a complex number. - - - - Hyperbolic Area Secant - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Secant of this Complex number. - - The complex value. - The hyperbolic arc secant of a complex number. - - - - Hyperbolic Area Cosecant - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cosecant of this Complex number. - - The complex value. - The hyperbolic arc cosecant of a complex number. - - - - Hamming window. Named after Richard Hamming. - Symmetric version, useful e.g. for filter design purposes. - - - - - Hamming window. Named after Richard Hamming. - Periodic version, useful e.g. for FFT purposes. - - - - - Hann window. Named after Julius von Hann. - Symmetric version, useful e.g. for filter design purposes. - - - - - Hann window. Named after Julius von Hann. - Periodic version, useful e.g. for FFT purposes. - - - - - Cosine window. - Symmetric version, useful e.g. for filter design purposes. - - - - - Cosine window. - Periodic version, useful e.g. for FFT purposes. - - - - - Lanczos window. - Symmetric version, useful e.g. for filter design purposes. - - - - - Lanczos window. 
- Periodic version, useful e.g. for FFT purposes. - - - - - Gauss window. - - - - - Blackman window. - - - - - Blackman-Harris window. - - - - - Blackman-Nuttall window. - - - - - Bartlett window. - - - - - Bartlett-Hann window. - - - - - Nuttall window. - - - - - Flat top window. - - - - - Uniform rectangular (Dirichlet) window. - - - - - Triangular window. - - - - - Tukey tapering window. A rectangular window bounded - by half a cosine window on each side. - - Width of the window - Fraction of the window occupied by the cosine parts - -
-
diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.dll b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.dll deleted file mode 100755 index 68dad64..0000000 Binary files a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.dll and /dev/null differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.xml b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.xml deleted file mode 100755 index 5f9e8af..0000000 --- a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.xml +++ /dev/null @@ -1,57152 +0,0 @@ - - - - MathNet.Numerics - - - - - Useful extension methods for Arrays. - - - - - Copies the values from on array to another. - - The source array. - The destination array. - - - - Copies the values from on array to another. - - The source array. - The destination array. - - - - Copies the values from on array to another. - - The source array. - The destination array. - - - - Copies the values from on array to another. - - The source array. - The destination array. - - - - Enumerative Combinatorics and Counting. - - - - - Count the number of possible variations without repetition. - The order matters and each object can be chosen only once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - Maximum number of distinct variations. - - - - Count the number of possible variations with repetition. - The order matters and each object can be chosen more than once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. - Maximum number of distinct variations with repetition. - - - - Count the number of possible combinations without repetition. - The order does not matter and each object can be chosen only once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - Maximum number of combinations. - - - - Count the number of possible combinations with repetition. - The order does not matter and an object can be chosen more than once. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. - Maximum number of combinations with repetition. - - - - Count the number of possible permutations (without repetition). - - Number of (distinguishable) elements in the set. - Maximum number of permutations without repetition. - - - - Generate a random permutation, without repetition, by generating the index numbers 0 to N-1 and shuffle them randomly. - Implemented using Fisher-Yates Shuffling. - - An array of length N that contains (in any order) the integers of the interval [0, N). - Number of (distinguishable) elements in the set. - The random number generator to use. Optional; the default random source will be used if null. - - - - Select a random permutation, without repetition, from a data array by reordering the provided array in-place. - Implemented using Fisher-Yates Shuffling. The provided data array will be modified. - - The data array to be reordered. The array will be modified by this routine. - The random number generator to use. Optional; the default random source will be used if null. - - - - Select a random permutation from a data sequence by returning the provided data in random order. - Implemented using Fisher-Yates Shuffling. 
- - The data elements to be reordered. - The random number generator to use. Optional; the default random source will be used if null. - - - - Generate a random combination, without repetition, by randomly selecting some of N elements. - - Number of elements in the set. - The random number generator to use. Optional; the default random source will be used if null. - Boolean mask array of length N, for each item true if it is selected. - - - - Generate a random combination, without repetition, by randomly selecting k of N elements. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - Boolean mask array of length N, for each item true if it is selected. - - - - Select a random combination, without repetition, from a data sequence by selecting k elements in original order. - - The data source to choose from. - Number of elements (k) to choose from the data set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen combination, in the original order. - - - - Generates a random combination, with repetition, by randomly selecting k of N elements. - - Number of elements in the set. - Number of elements to choose from the set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - Integer mask array of length N, for each item the number of times it was selected. - - - - Select a random combination, with repetition, from a data sequence by selecting k elements in original order. - - The data source to choose from. - Number of elements (k) to choose from the data set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen combination with repetition, in the original order. - - - - Generate a random variation, without repetition, by randomly selecting k of n elements with order. - Implemented using partial Fisher-Yates Shuffling. - - Number of elements in the set. - Number of elements to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - An array of length K that contains the indices of the selections as integers of the interval [0, N). - - - - Select a random variation, without repetition, from a data sequence by randomly selecting k elements in random order. - Implemented using partial Fisher-Yates Shuffling. - - The data source to choose from. - Number of elements (k) to choose from the set. Each element is chosen at most once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen variation, in random order. - - - - Generate a random variation, with repetition, by randomly selecting k of n elements with order. - - Number of elements in the set. - Number of elements to choose from the set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - An array of length K that contains the indices of the selections as integers of the interval [0, N). - - - - Select a random variation, with repetition, from a data sequence by randomly selecting k elements in random order. - - The data source to choose from. 
- Number of elements (k) to choose from the data set. Elements can be chosen more than once. - The random number generator to use. Optional; the default random source will be used if null. - The chosen variation with repetition, in random order. - - - - 32-bit single precision complex numbers class. - - - - The class Complex32 provides all elementary operations - on complex numbers. All the operators +, -, - *, /, ==, != are defined in the - canonical way. Additional complex trigonometric functions - are also provided. Note that the Complex32 structures - has two special constant values and - . - - - - Complex32 x = new Complex32(1f,2f); - Complex32 y = Complex32.FromPolarCoordinates(1f, Math.Pi); - Complex32 z = (x + y) / (x - y); - - - - For mathematical details about complex numbers, please - have a look at the - Wikipedia - - - - - - The real component of the complex number. - - - - - The imaginary component of the complex number. - - - - - Initializes a new instance of the Complex32 structure with the given real - and imaginary parts. - - The value for the real component. - The value for the imaginary component. - - - - Creates a complex number from a point's polar coordinates. - - A complex number. - The magnitude, which is the distance from the origin (the intersection of the x-axis and the y-axis) to the number. - The phase, which is the angle from the line to the horizontal axis, measured in radians. - - - - Returns a new instance - with a real number equal to zero and an imaginary number equal to zero. - - - - - Returns a new instance - with a real number equal to one and an imaginary number equal to zero. - - - - - Returns a new instance - with a real number equal to zero and an imaginary number equal to one. - - - - - Returns a new instance - with real and imaginary numbers positive infinite. - - - - - Returns a new instance - with real and imaginary numbers not a number. - - - - - Gets the real component of the complex number. - - The real component of the complex number. - - - - Gets the real imaginary component of the complex number. - - The real imaginary component of the complex number. - - - - Gets the phase or argument of this Complex32. - - - Phase always returns a value bigger than negative Pi and - smaller or equal to Pi. If this Complex32 is zero, the Complex32 - is assumed to be positive real with an argument of zero. - - The phase or argument of this Complex32 - - - - Gets the magnitude (or absolute value) of a complex number. - - Assuming that magnitude of (inf,a) and (a,inf) and (inf,inf) is inf and (NaN,a), (a,NaN) and (NaN,NaN) is NaN - The magnitude of the current instance. - - - - Gets the squared magnitude (or squared absolute value) of a complex number. - - The squared magnitude of the current instance. - - - - Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) - - The unity of this Complex32. - - - - Gets a value indicating whether the Complex32 is zero. - - true if this instance is zero; otherwise, false. - - - - Gets a value indicating whether the Complex32 is one. - - true if this instance is one; otherwise, false. - - - - Gets a value indicating whether the Complex32 is the imaginary unit. - - true if this instance is ImaginaryOne; otherwise, false. - - - - Gets a value indicating whether the provided Complex32evaluates - to a value that is not a number. - - - true if this instance is ; otherwise, - false. - - - - - Gets a value indicating whether the provided Complex32 evaluates to an - infinite value. 
- - - true if this instance is infinite; otherwise, false. - - - True if it either evaluates to a complex infinity - or to a directed infinity. - - - - - Gets a value indicating whether the provided Complex32 is real. - - true if this instance is a real number; otherwise, false. - - - - Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. - - - true if this instance is real nonnegative number; otherwise, false. - - - - - Exponential of this Complex32 (exp(x), E^x). - - - The exponential of this complex number. - - - - - Natural Logarithm of this Complex32 (Base E). - - The natural logarithm of this complex number. - - - - Common Logarithm of this Complex32 (Base 10). - - The common logarithm of this complex number. - - - - Logarithm of this Complex32 with custom base. - - The logarithm of this complex number. - - - - Raise this Complex32 to the given value. - - - The exponent. - - - The complex number raised to the given exponent. - - - - - Raise this Complex32 to the inverse of the given value. - - - The root exponent. - - - The complex raised to the inverse of the given exponent. - - - - - The Square (power 2) of this Complex32 - - - The square of this complex number. - - - - - The Square Root (power 1/2) of this Complex32 - - - The square root of this complex number. - - - - - Evaluate all square roots of this Complex32. - - - - - Evaluate all cubic roots of this Complex32. - - - - - Equality test. - - One of complex numbers to compare. - The other complex numbers to compare. - true if the real and imaginary components of the two complex numbers are equal; false otherwise. - - - - Inequality test. - - One of complex numbers to compare. - The other complex numbers to compare. - true if the real or imaginary components of the two complex numbers are not equal; false otherwise. - - - - Unary addition. - - The complex number to operate on. - Returns the same complex number. - - - - Unary minus. - - The complex number to operate on. - The negated value of the . - - - Addition operator. Adds two complex numbers together. - The result of the addition. - One of the complex numbers to add. - The other complex numbers to add. - - - Subtraction operator. Subtracts two complex numbers. - The result of the subtraction. - The complex number to subtract from. - The complex number to subtract. - - - Addition operator. Adds a complex number and float together. - The result of the addition. - The complex numbers to add. - The float value to add. - - - Subtraction operator. Subtracts float value from a complex value. - The result of the subtraction. - The complex number to subtract from. - The float value to subtract. - - - Addition operator. Adds a complex number and float together. - The result of the addition. - The float value to add. - The complex numbers to add. - - - Subtraction operator. Subtracts complex value from a float value. - The result of the subtraction. - The float vale to subtract from. - The complex value to subtract. - - - Multiplication operator. Multiplies two complex numbers. - The result of the multiplication. - One of the complex numbers to multiply. - The other complex number to multiply. - - - Multiplication operator. Multiplies a complex number with a float value. - The result of the multiplication. - The float value to multiply. - The complex number to multiply. - - - Multiplication operator. Multiplies a complex number with a float value. - The result of the multiplication. - The complex number to multiply. 
- The float value to multiply. - - - Division operator. Divides a complex number by another. - Enhanced Smith's algorithm for dividing two complex numbers - - The result of the division. - The dividend. - The divisor. - - - - Helper method for dividing. - - Re first - Im first - Re second - Im second - - - - - Division operator. Divides a float value by a complex number. - Algorithm based on Smith's algorithm - - The result of the division. - The dividend. - The divisor. - - - Division operator. Divides a complex number by a float value. - The result of the division. - The dividend. - The divisor. - - - - Computes the conjugate of a complex number and returns the result. - - - - - Returns the multiplicative inverse of a complex number. - - - - - Converts the value of the current complex number to its equivalent string representation in Cartesian form. - - The string representation of the current instance in Cartesian form. - - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified format for its real and imaginary parts. - - The string representation of the current instance in Cartesian form. - A standard or custom numeric format string. - - is not a valid format string. - - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified culture-specific formatting information. - - The string representation of the current instance in Cartesian form, as specified by . - An object that supplies culture-specific formatting information. - - - Converts the value of the current complex number to its equivalent string representation - in Cartesian form by using the specified format and culture-specific format information for its real and imaginary parts. - The string representation of the current instance in Cartesian form, as specified by and . - A standard or custom numeric format string. - An object that supplies culture-specific formatting information. - - is not a valid format string. - - - - Checks if two complex numbers are equal. Two complex numbers are equal if their - corresponding real and imaginary components are equal. - - - Returns true if the two objects are the same object, or if their corresponding - real and imaginary components are equal, false otherwise. - - - The complex number to compare to with. - - - - - The hash code for the complex number. - - - The hash code of the complex number. - - - The hash code is calculated as - System.Math.Exp(ComplexMath.Absolute(complexNumber)). - - - - - Checks if two complex numbers are equal. Two complex numbers are equal if their - corresponding real and imaginary components are equal. - - - Returns true if the two objects are the same object, or if their corresponding - real and imaginary components are equal, false otherwise. - - - The complex number to compare to with. - - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a float. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Parse a part (real or complex) from a complex number. - - Start Token. - Is set to true if the part identified itself as being imaginary. - - An that supplies culture-specific - formatting information. - - Resulting part as float. 
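The division operator above is documented as using an "enhanced Smith's algorithm". As a rough illustration of why plain component-wise division is avoided, the sketch below implements classic Smith scaling for (a+bi)/(c+di); it is not the library's exact code, and the enhanced variant adds further range safeguards.

```csharp
using System;

// Illustrative sketch of Smith's algorithm for complex division (a+bi)/(c+di).
// Scaling by the larger of |c| and |d| keeps the ratio r bounded by 1, so the
// naive denominator c^2 + d^2 is never formed and extreme operands do not
// overflow or underflow. The library's "enhanced" variant is not shown here.
static class SmithDivision
{
    public static (float re, float im) Divide(float a, float b, float c, float d)
    {
        if (Math.Abs(d) <= Math.Abs(c))
        {
            float r = d / c;                 // |r| <= 1
            float t = 1f / (c + d * r);
            return ((a + b * r) * t, (b - a * r) * t);
        }
        else
        {
            float r = c / d;
            float t = 1f / (c * r + d);
            return ((a * r + b) * t, (b * r - a) * t);
        }
    }

    static void Main()
    {
        // (1+2i) / (3+4i) = (11+2i)/25 = 0.44 + 0.08i
        var (re, im) = Divide(1f, 2f, 3f, 4f);
        Console.WriteLine($"{re} + {im}i");
    }
}
```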
- - - - - Converts the string representation of a complex number to a single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Converts the string representation of a complex number to single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Explicit conversion of a real decimal to a Complex32. - - The decimal value to convert. - The result of the conversion. - - - - Explicit conversion of a Complex to a Complex32. - - The decimal value to convert. - The result of the conversion. - - - - Implicit conversion of a real byte to a Complex32. - - The byte value to convert. - The result of the conversion. - - - - Implicit conversion of a real short to a Complex32. - - The short value to convert. - The result of the conversion. - - - - Implicit conversion of a signed byte to a Complex32. - - The signed byte value to convert. - The result of the conversion. - - - - Implicit conversion of a unsigned real short to a Complex32. - - The unsigned short value to convert. - The result of the conversion. - - - - Implicit conversion of a real int to a Complex32. - - The int value to convert. - The result of the conversion. - - - - Implicit conversion of a BigInteger int to a Complex32. - - The BigInteger value to convert. - The result of the conversion. - - - - Implicit conversion of a real long to a Complex32. - - The long value to convert. - The result of the conversion. - - - - Implicit conversion of a real uint to a Complex32. - - The uint value to convert. - The result of the conversion. - - - - Implicit conversion of a real ulong to a Complex32. - - The ulong value to convert. - The result of the conversion. - - - - Implicit conversion of a real float to a Complex32. - - The float value to convert. - The result of the conversion. - - - - Implicit conversion of a real double to a Complex32. - - The double value to convert. - The result of the conversion. - - - - Converts this Complex32 to a . - - A with the same values as this Complex32. - - - - Returns the additive inverse of a specified complex number. - - The result of the real and imaginary components of the value parameter multiplied by -1. - A complex number. - - - - Computes the conjugate of a complex number and returns the result. - - The conjugate of . - A complex number. - - - - Adds two complex numbers and returns the result. - - The sum of and . - The first complex number to add. - The second complex number to add. - - - - Subtracts one complex number from another and returns the result. - - The result of subtracting from . - The value to subtract from (the minuend). - The value to subtract (the subtrahend). - - - - Returns the product of two complex numbers. - - The product of the and parameters. - The first complex number to multiply. - The second complex number to multiply. 
- - - - Divides one complex number by another and returns the result. - - The quotient of the division. - The complex number to be divided. - The complex number to divide by. - - - - Returns the multiplicative inverse of a complex number. - - The reciprocal of . - A complex number. - - - - Returns the square root of a specified complex number. - - The square root of . - A complex number. - - - - Gets the absolute value (or magnitude) of a complex number. - - The absolute value of . - A complex number. - - - - Returns e raised to the power specified by a complex number. - - The number e raised to the power . - A complex number that specifies a power. - - - - Returns a specified complex number raised to a power specified by a complex number. - - The complex number raised to the power . - A complex number to be raised to a power. - A complex number that specifies a power. - - - - Returns a specified complex number raised to a power specified by a single-precision floating-point number. - - The complex number raised to the power . - A complex number to be raised to a power. - A single-precision floating-point number that specifies a power. - - - - Returns the natural (base e) logarithm of a specified complex number. - - The natural (base e) logarithm of . - A complex number. - - - - Returns the logarithm of a specified complex number in a specified base. - - The logarithm of in base . - A complex number. - The base of the logarithm. - - - - Returns the base-10 logarithm of a specified complex number. - - The base-10 logarithm of . - A complex number. - - - - Returns the sine of the specified complex number. - - The sine of . - A complex number. - - - - Returns the cosine of the specified complex number. - - The cosine of . - A complex number. - - - - Returns the tangent of the specified complex number. - - The tangent of . - A complex number. - - - - Returns the angle that is the arc sine of the specified complex number. - - The angle which is the arc sine of . - A complex number. - - - - Returns the angle that is the arc cosine of the specified complex number. - - The angle, measured in radians, which is the arc cosine of . - A complex number that represents a cosine. - - - - Returns the angle that is the arc tangent of the specified complex number. - - The angle that is the arc tangent of . - A complex number. - - - - Returns the hyperbolic sine of the specified complex number. - - The hyperbolic sine of . - A complex number. - - - - Returns the hyperbolic cosine of the specified complex number. - - The hyperbolic cosine of . - A complex number. - - - - Returns the hyperbolic tangent of the specified complex number. - - The hyperbolic tangent of . - A complex number. - - - - Extension methods for the Complex type provided by System.Numerics - - - - - Gets the squared magnitude of the Complex number. - - The number to perform this operation on. - The squared magnitude of the Complex number. - - - - Gets the squared magnitude of the Complex number. - - The number to perform this operation on. - The squared magnitude of the Complex number. - - - - Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) - - The unity of this Complex. - - - - Gets the conjugate of the Complex number. - - The number to perform this operation on. - - The semantic of setting the conjugate is such that - - // a, b of type Complex32 - a.Conjugate = b; - - is equivalent to - - // a, b of type Complex32 - a = b.Conjugate - - - The conjugate of the number. 
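The root, exponential, logarithmic and trigonometric functions listed above behave like their double-precision counterparts on System.Numerics.Complex, which the extension-method entries below target. A short sanity-check sketch using only the standard .NET type (no library calls):

```csharp
using System;
using System.Numerics;

// Elementary complex functions as described above, exercised on the standard
// double-precision System.Numerics.Complex type for illustration.
class ComplexElementaryFunctions
{
    static void Main()
    {
        Complex z = new Complex(1.0, 2.0);

        Console.WriteLine(Complex.Exp(Complex.ImaginaryOne * Math.PI)); // ~ -1 (Euler's identity)
        Console.WriteLine(Complex.Log(Complex.Exp(z)));                 // recovers z (within rounding)
        Console.WriteLine(Complex.Sqrt(z) * Complex.Sqrt(z));           // ~ z
        Console.WriteLine(Complex.Pow(z, 2.0) - z * z);                 // ~ 0

        // Polar construction: magnitude 2, phase pi/4 -> sqrt(2) + sqrt(2)i
        Complex p = Complex.FromPolarCoordinates(2.0, Math.PI / 4);
        Console.WriteLine(p);
    }
}
```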
- - - - Returns the multiplicative inverse of a complex number. - - - - - Exponential of this Complex (exp(x), E^x). - - The number to perform this operation on. - - The exponential of this complex number. - - - - - Natural Logarithm of this Complex (Base E). - - The number to perform this operation on. - - The natural logarithm of this complex number. - - - - - Common Logarithm of this Complex (Base 10). - - The common logarithm of this complex number. - - - - Logarithm of this Complex with custom base. - - The logarithm of this complex number. - - - - Raise this Complex to the given value. - - The number to perform this operation on. - - The exponent. - - - The complex number raised to the given exponent. - - - - - Raise this Complex to the inverse of the given value. - - The number to perform this operation on. - - The root exponent. - - - The complex raised to the inverse of the given exponent. - - - - - The Square (power 2) of this Complex - - The number to perform this operation on. - - The square of this complex number. - - - - - The Square Root (power 1/2) of this Complex - - The number to perform this operation on. - - The square root of this complex number. - - - - - Evaluate all square roots of this Complex. - - - - - Evaluate all cubic roots of this Complex. - - - - - Gets a value indicating whether the Complex32 is zero. - - The number to perform this operation on. - true if this instance is zero; otherwise, false. - - - - Gets a value indicating whether the Complex32 is one. - - The number to perform this operation on. - true if this instance is one; otherwise, false. - - - - Gets a value indicating whether the Complex32 is the imaginary unit. - - true if this instance is ImaginaryOne; otherwise, false. - The number to perform this operation on. - - - - Gets a value indicating whether the provided Complex32evaluates - to a value that is not a number. - - The number to perform this operation on. - - true if this instance is NaN; otherwise, - false. - - - - - Gets a value indicating whether the provided Complex32 evaluates to an - infinite value. - - The number to perform this operation on. - - true if this instance is infinite; otherwise, false. - - - True if it either evaluates to a complex infinity - or to a directed infinity. - - - - - Gets a value indicating whether the provided Complex32 is real. - - The number to perform this operation on. - true if this instance is a real number; otherwise, false. - - - - Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. - - The number to perform this operation on. - - true if this instance is real nonnegative number; otherwise, false. - - - - - Returns a Norm of a value of this type, which is appropriate for measuring how - close this value is to zero. - - - - - Returns a Norm of a value of this type, which is appropriate for measuring how - close this value is to zero. - - - - - Returns a Norm of the difference of two values of this type, which is - appropriate for measuring how close together these two values are. - - - - - Returns a Norm of the difference of two values of this type, which is - appropriate for measuring how close together these two values are. - - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - The string to parse. 
- - - - - Creates a complex number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Parse a part (real or complex) from a complex number. - - Start Token. - Is set to true if the part identified itself as being imaginary. - - An that supplies culture-specific - formatting information. - - Resulting part as double. - - - - - Converts the string representation of a complex number to a double-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. - - - - - Converts the string representation of a complex number to double-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized - - - - - Creates a Complex32 number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - - - Creates a Complex32 number based on a string. The string can be in the - following formats (without the quotes): 'n', 'ni', 'n +/- ni', - 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. - - - A complex number containing the value specified by the given string. - - - the string to parse. - - - An that supplies culture-specific - formatting information. - - - - - Converts the string representation of a complex number to a single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized. - - - - - Converts the string representation of a complex number to single-precision complex number equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a complex number to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. - - - - - A collection of frequently used mathematical constants. 
- - - - The number e - - - The number log[2](e) - - - The number log[10](e) - - - The number log[e](2) - - - The number log[e](10) - - - The number log[e](pi) - - - The number log[e](2*pi)/2 - - - The number 1/e - - - The number sqrt(e) - - - The number sqrt(2) - - - The number sqrt(3) - - - The number sqrt(1/2) = 1/sqrt(2) = sqrt(2)/2 - - - The number sqrt(3)/2 - - - The number pi - - - The number pi*2 - - - The number pi/2 - - - The number pi*3/2 - - - The number pi/4 - - - The number sqrt(pi) - - - The number sqrt(2pi) - - - The number sqrt(pi/2) - - - The number sqrt(2*pi*e) - - - The number log(sqrt(2*pi)) - - - The number log(sqrt(2*pi*e)) - - - The number log(2 * sqrt(e / pi)) - - - The number 1/pi - - - The number 2/pi - - - The number 1/sqrt(pi) - - - The number 1/sqrt(2pi) - - - The number 2/sqrt(pi) - - - The number 2 * sqrt(e / pi) - - - The number (pi)/180 - factor to convert from Degree (deg) to Radians (rad). - - - - - The number (pi)/200 - factor to convert from NewGrad (grad) to Radians (rad). - - - - - The number ln(10)/20 - factor to convert from Power Decibel (dB) to Neper (Np). Use this version when the Decibel represent a power gain but the compared values are not powers (e.g. amplitude, current, voltage). - - - The number ln(10)/10 - factor to convert from Neutral Decibel (dB) to Neper (Np). Use this version when either both or neither of the Decibel and the compared values represent powers. - - - The Catalan constant - Sum(k=0 -> inf){ (-1)^k/(2*k + 1)2 } - - - The Euler-Mascheroni constant - lim(n -> inf){ Sum(k=1 -> n) { 1/k - log(n) } } - - - The number (1+sqrt(5))/2, also known as the golden ratio - - - The Glaisher constant - e^(1/12 - Zeta(-1)) - - - The Khinchin constant - prod(k=1 -> inf){1+1/(k*(k+2))^log(k,2)} - - - - The size of a double in bytes. - - - - - The size of an int in bytes. - - - - - The size of a float in bytes. - - - - - The size of a Complex in bytes. - - - - - The size of a Complex in bytes. 
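Most of the mathematical constants above are simple closed-form expressions. The sketch below recomputes a few of them from the definitions stated in the list, using local variable names only (not the library's field names), so the conversions they encode are explicit:

```csharp
using System;

// A few of the constants listed above, computed directly from their definitions.
class ConstantsSketch
{
    static void Main()
    {
        double sqrt2       = Math.Sqrt(2.0);
        double logSqrt2Pi  = Math.Log(Math.Sqrt(2.0 * Math.PI));
        double goldenRatio = (1.0 + Math.Sqrt(5.0)) / 2.0;
        double degToRad    = Math.PI / 180.0;   // degree (deg) to radians (rad)
        double gradToRad   = Math.PI / 200.0;   // gradian (grad) to radians (rad)

        Console.WriteLine($"sqrt(2)        = {sqrt2}");
        Console.WriteLine($"log(sqrt(2pi)) = {logSqrt2Pi}");
        Console.WriteLine($"golden ratio   = {goldenRatio}");
        Console.WriteLine($"45 deg         = {45.0 * degToRad} rad");
        Console.WriteLine($"100 grad       = {100.0 * gradToRad} rad");
    }
}
```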
- - - - Speed of Light in Vacuum: c_0 = 2.99792458e8 [m s^-1] (defined, exact; 2007 CODATA) - - - Magnetic Permeability in Vacuum: mu_0 = 4*Pi * 10^-7 [N A^-2 = kg m A^-2 s^-2] (defined, exact; 2007 CODATA) - - - Electric Permittivity in Vacuum: epsilon_0 = 1/(mu_0*c_0^2) [F m^-1 = A^2 s^4 kg^-1 m^-3] (defined, exact; 2007 CODATA) - - - Characteristic Impedance of Vacuum: Z_0 = mu_0*c_0 [Ohm = m^2 kg s^-3 A^-2] (defined, exact; 2007 CODATA) - - - Newtonian Constant of Gravitation: G = 6.67429e-11 [m^3 kg^-1 s^-2] (2007 CODATA) - - - Planck's constant: h = 6.62606896e-34 [J s = m^2 kg s^-1] (2007 CODATA) - - - Reduced Planck's constant: h_bar = h / (2*Pi) [J s = m^2 kg s^-1] (2007 CODATA) - - - Planck mass: m_p = (h_bar*c_0/G)^(1/2) [kg] (2007 CODATA) - - - Planck temperature: T_p = (h_bar*c_0^5/G)^(1/2)/k [K] (2007 CODATA) - - - Planck length: l_p = h_bar/(m_p*c_0) [m] (2007 CODATA) - - - Planck time: t_p = l_p/c_0 [s] (2007 CODATA) - - - Elementary Electron Charge: e = 1.602176487e-19 [C = A s] (2007 CODATA) - - - Magnetic Flux Quantum: theta_0 = h/(2*e) [Wb = m^2 kg s^-2 A^-1] (2007 CODATA) - - - Conductance Quantum: G_0 = 2*e^2/h [S = m^-2 kg^-1 s^3 A^2] (2007 CODATA) - - - Josephson Constant: K_J = 2*e/h [Hz V^-1] (2007 CODATA) - - - Von Klitzing Constant: R_K = h/e^2 [Ohm = m^2 kg s^-3 A^-2] (2007 CODATA) - - - Bohr Magneton: mu_B = e*h_bar/2*m_e [J T^-1] (2007 CODATA) - - - Nuclear Magneton: mu_N = e*h_bar/2*m_p [J T^-1] (2007 CODATA) - - - Fine Structure Constant: alpha = e^2/4*Pi*e_0*h_bar*c_0 [1] (2007 CODATA) - - - Rydberg Constant: R_infty = alpha^2*m_e*c_0/2*h [m^-1] (2007 CODATA) - - - Bor Radius: a_0 = alpha/4*Pi*R_infty [m] (2007 CODATA) - - - Hartree Energy: E_h = 2*R_infty*h*c_0 [J] (2007 CODATA) - - - Quantum of Circulation: h/2*m_e [m^2 s^-1] (2007 CODATA) - - - Fermi Coupling Constant: G_F/(h_bar*c_0)^3 [GeV^-2] (2007 CODATA) - - - Weak Mixin Angle: sin^2(theta_W) [1] (2007 CODATA) - - - Electron Mass: [kg] (2007 CODATA) - - - Electron Mass Energy Equivalent: [J] (2007 CODATA) - - - Electron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Electron Compton Wavelength: [m] (2007 CODATA) - - - Classical Electron Radius: [m] (2007 CODATA) - - - Thomson Cross Section: [m^2] (2002 CODATA) - - - Electron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Electon G-Factor: [1] (2007 CODATA) - - - Muon Mass: [kg] (2007 CODATA) - - - Muon Mass Energy Equivalent: [J] (2007 CODATA) - - - Muon Molar Mass: [kg mol^-1] (2007 CODATA) - - - Muon Compton Wavelength: [m] (2007 CODATA) - - - Muon Magnetic Moment: [J T^-1] (2007 CODATA) - - - Muon G-Factor: [1] (2007 CODATA) - - - Tau Mass: [kg] (2007 CODATA) - - - Tau Mass Energy Equivalent: [J] (2007 CODATA) - - - Tau Molar Mass: [kg mol^-1] (2007 CODATA) - - - Tau Compton Wavelength: [m] (2007 CODATA) - - - Proton Mass: [kg] (2007 CODATA) - - - Proton Mass Energy Equivalent: [J] (2007 CODATA) - - - Proton Molar Mass: [kg mol^-1] (2007 CODATA) - - - Proton Compton Wavelength: [m] (2007 CODATA) - - - Proton Magnetic Moment: [J T^-1] (2007 CODATA) - - - Proton G-Factor: [1] (2007 CODATA) - - - Proton Shielded Magnetic Moment: [J T^-1] (2007 CODATA) - - - Proton Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Proton Shielded Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Neutron Mass: [kg] (2007 CODATA) - - - Neutron Mass Energy Equivalent: [J] (2007 CODATA) - - - Neutron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Neuron Compton Wavelength: [m] (2007 CODATA) - - - Neutron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Neutron G-Factor: [1] 
(2007 CODATA) - - - Neutron Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) - - - Deuteron Mass: [kg] (2007 CODATA) - - - Deuteron Mass Energy Equivalent: [J] (2007 CODATA) - - - Deuteron Molar Mass: [kg mol^-1] (2007 CODATA) - - - Deuteron Magnetic Moment: [J T^-1] (2007 CODATA) - - - Helion Mass: [kg] (2007 CODATA) - - - Helion Mass Energy Equivalent: [J] (2007 CODATA) - - - Helion Molar Mass: [kg mol^-1] (2007 CODATA) - - - Avogadro constant: [mol^-1] (2010 CODATA) - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 000 - - - The SI prefix factor corresponding to 1 000 000 - - - The SI prefix factor corresponding to 1 000 - - - The SI prefix factor corresponding to 100 - - - The SI prefix factor corresponding to 10 - - - The SI prefix factor corresponding to 0.1 - - - The SI prefix factor corresponding to 0.01 - - - The SI prefix factor corresponding to 0.001 - - - The SI prefix factor corresponding to 0.000 001 - - - The SI prefix factor corresponding to 0.000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 000 001 - - - The SI prefix factor corresponding to 0.000 000 000 000 000 000 000 001 - - - - Sets parameters for the library. - - - - - Use a specific provider if configured, e.g. using - environment variables, or fall back to the best providers. - - - - - Use the best provider available. - - - - - Use the Intel MKL native provider for linear algebra. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Use the Intel MKL native provider for linear algebra, with the specified configuration parameters. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Try to use the Intel MKL native provider for linear algebra. - - - True if the provider was found and initialized successfully. - False if it failed and the previous provider is still active. - - - - - Use the Nvidia CUDA native provider for linear algebra. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Try to use the Nvidia CUDA native provider for linear algebra. - - - True if the provider was found and initialized successfully. - False if it failed and the previous provider is still active. - - - - - Use the OpenBLAS native provider for linear algebra. - Throws if it is not available or failed to initialize, in which case the previous provider is still active. - - - - - Try to use the OpenBLAS native provider for linear algebra. - - - True if the provider was found and initialized successfully. - False if it failed and the previous provider is still active. - - - - - Try to use any available native provider in an undefined order. - - - True if one of the native providers was found and successfully initialized. - False if it failed and the previous provider is still active. 
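The provider-selection entries above describe a try-then-fall-back pattern: attempt a native linear algebra provider (MKL, CUDA, OpenBLAS) and keep the previous provider if initialisation fails. A hedged sketch of that pattern follows; the member names Control.TryUseNativeMKL, Control.TryUseNative, Control.UseManaged and Control.LinearAlgebraProvider are assumptions inferred from the descriptions and should be checked against the installed library version.

```csharp
using System;
using MathNet.Numerics;

// Assumed provider-selection pattern: the Try* calls return true only if the
// native binaries were found and initialised; otherwise the previously active
// provider stays in place, so a managed fallback is always safe.
class ProviderSetup
{
    static void Main()
    {
        if (!Control.TryUseNativeMKL() && !Control.TryUseNative())
        {
            Control.UseManaged();   // pure managed fallback
        }

        Console.WriteLine(Control.LinearAlgebraProvider);
    }
}
```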
- - - - - Gets or sets a value indicating whether the distribution classes check validate each parameter. - For the multivariate distributions this could involve an expensive matrix factorization. - The default setting of this property is true. - - - - - Gets or sets a value indicating whether to use thread safe random number generators (RNG). - Thread safe RNG about two and half time slower than non-thread safe RNG. - - - true to use thread safe random number generators ; otherwise, false. - - - - - Optional path to try to load native provider binaries from. - - - - - Gets or sets a value indicating how many parallel worker threads shall be used - when parallelization is applicable. - - Default to the number of processor cores, must be between 1 and 1024 (inclusive). - - - - Gets or sets the TaskScheduler used to schedule the worker tasks. - - - - - Gets or sets the order of the matrix when linear algebra provider - must calculate multiply in parallel threads. - - The order. Default 64, must be at least 3. - - - - Gets or sets the number of elements a vector or matrix - must contain before we multiply threads. - - Number of elements. Default 300, must be at least 3. - - - - Numerical Derivative. - - - - - Initialized a NumericalDerivative with the given points and center. - - - - - Initialized a NumericalDerivative with the default points and center for the given order. - - - - - Evaluates the derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - Derivative order. - - - - Creates a function handle for the derivative of a scalar univariate function. - - Univariate function handle. - Derivative order. - - - - Evaluates the first derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - - - - Creates a function handle for the first derivative of a scalar univariate function. - - Univariate function handle. - - - - Evaluates the second derivative of a scalar univariate function. - - Univariate function handle. - Point at which to evaluate the derivative. - - - - Creates a function handle for the second derivative of a scalar univariate function. - - Univariate function handle. - - - - Evaluates the partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - - - - Creates a function handle for the partial derivative of a multivariate function. - - Multivariate function handle. - Index of independent variable for partial derivative. - Derivative order. - - - - Evaluates the first partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. - Index of independent variable for partial derivative. - - - - Creates a function handle for the first partial derivative of a multivariate function. - - Multivariate function handle. - Index of independent variable for partial derivative. - - - - Evaluates the partial derivative of a bivariate function. - - Bivariate function handle. - First argument at which to evaluate the derivative. - Second argument at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - - - - Creates a function handle for the partial derivative of a bivariate function. - - Bivariate function handle. - Index of independent variable for partial derivative. - Derivative order. 
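The NumericalDerivative entries above default to a 3-point central difference. For orientation, here is a minimal stand-alone sketch of that scheme for the first and second derivative, with a relative step size; the library's exact step-size scaling and point/center options may differ.

```csharp
using System;

// 3-point central differences:
//   f'(x)  ~ (f(x+h) - f(x-h)) / (2h)
//   f''(x) ~ (f(x+h) - 2 f(x) + f(x-h)) / h^2
// The step h = eps^(1/k) * (1 + |x|) (eps ~ 2.2e-16) balances truncation and
// rounding error; this is one common choice, not necessarily the library's.
class CentralDifference
{
    static double First(Func<double, double> f, double x)
    {
        double h = Math.Pow(2.2e-16, 1.0 / 3.0) * (1.0 + Math.Abs(x));
        return (f(x + h) - f(x - h)) / (2.0 * h);
    }

    static double Second(Func<double, double> f, double x)
    {
        double h = Math.Pow(2.2e-16, 1.0 / 4.0) * (1.0 + Math.Abs(x));
        return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h);
    }

    static void Main()
    {
        Func<double, double> f = x => Math.Sin(x);
        Console.WriteLine(First(f, 1.0));   // ~ cos(1) ~ 0.5403
        Console.WriteLine(Second(f, 1.0));  // ~ -sin(1) ~ -0.8415
    }
}
```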
- - - - Evaluates the first partial derivative of a bivariate function. - - Bivariate function handle. - First argument at which to evaluate the derivative. - Second argument at which to evaluate the derivative. - Index of independent variable for partial derivative. - - - - Creates a function handle for the first partial derivative of a bivariate function. - - Bivariate function handle. - Index of independent variable for partial derivative. - - - - Class to calculate finite difference coefficients using Taylor series expansion method. - - - For n points, coefficients are calculated up to the maximum derivative order possible (n-1). - The current function value position specifies the "center" for surrounding coefficients. - Selecting the first, middle or last positions represent forward, backwards and central difference methods. - - - - - - - Number of points for finite difference coefficients. Changing this value recalculates the coefficients table. - - - - - Initializes a new instance of the class. - - Number of finite difference coefficients. - - - - Gets the finite difference coefficients for a specified center and order. - - Current function position with respect to coefficients. Must be within point range. - Order of finite difference coefficients. - Vector of finite difference coefficients. - - - - Gets the finite difference coefficients for all orders at a specified center. - - Current function position with respect to coefficients. Must be within point range. - Rectangular array of coefficients, with columns specifying order. - - - - Type of finite different step size. - - - - - The absolute step size value will be used in numerical derivatives, regardless of order or function parameters. - - - - - A base step size value, h, will be scaled according to the function input parameter. A common example is hx = h*(1+abs(x)), however - this may vary depending on implementation. This definition only guarantees that the only scaling will be relative to the - function input parameter and not the order of the finite difference derivative. - - - - - A base step size value, eps (typically machine precision), is scaled according to the finite difference coefficient order - and function input parameter. The initial scaling according to finite different coefficient order can be thought of as producing a - base step size, h, that is equivalent to scaling. This step size is then scaled according to the function - input parameter. Although implementation may vary, an example of second order accurate scaling may be (eps)^(1/3)*(1+abs(x)). - - - - - Class to evaluate the numerical derivative of a function using finite difference approximations. - Variable point and center methods can be initialized . - This class can also be used to return function handles (delegates) for a fixed derivative order and variable. - It is possible to evaluate the derivative and partial derivative of univariate and multivariate functions respectively. - - - - - Initializes a NumericalDerivative class with the default 3 point center difference method. - - - - - Initialized a NumericalDerivative class. - - Number of points for finite difference derivatives. - Location of the center with respect to other points. Value ranges from zero to points-1. - - - - Sets and gets the finite difference step size. This value is for each function evaluation if relative step size types are used. - If the base step size used in scaling is desired, see . - - - Setting then getting the StepSize may return a different value. 
This is not unusual since a user-defined step size is converted to a - base-2 representable number to improve finite difference accuracy. - - - - - Sets and gets the base finite difference step size. This assigned value to this parameter is only used if is set to RelativeX. - However, if the StepType is Relative, it will contain the base step size computed from based on the finite difference order. - - - - - Sets and gets the base finite difference step size. This parameter is only used if is set to Relative. - By default this is set to machine epsilon, from which is computed. - - - - - Sets and gets the location of the center point for the finite difference derivative. - - - - - Number of times a function is evaluated for numerical derivatives. - - - - - Type of step size for computing finite differences. If set to absolute, dx = h. - If set to relative, dx = (1+abs(x))*h^(2/(order+1)). This provides accurate results when - h is approximately equal to the square-root of machine accuracy, epsilon. - - - - - Evaluates the derivative of equidistant points using the finite difference method. - - Vector of points StepSize apart. - Derivative order. - Finite difference step size. - Derivative of points of the specified order. - - - - Evaluates the derivative of a scalar univariate function. - - - Supplying the optional argument currentValue will reduce the number of function evaluations - required to calculate the finite difference derivative. - - Function handle. - Point at which to compute the derivative. - Derivative order. - Current function value at center. - Function derivative at x of the specified order. - - - - Creates a function handle for the derivative of a scalar univariate function. - - Input function handle. - Derivative order. - Function handle that evaluates the derivative of input function at a fixed order. - - - - Evaluates the partial derivative of a multivariate function. - - Multivariate function handle. - Vector at which to evaluate the derivative. - Index of independent variable for partial derivative. - Derivative order. - Current function value at center. - Function partial derivative at x of the specified order. - - - - Evaluates the partial derivatives of a multivariate function array. - - - This function assumes the input vector x is of the correct length for f. - - Multivariate vector function array handle. - Vector at which to evaluate the derivatives. - Index of independent variable for partial derivative. - Derivative order. - Current function value at center. - Vector of functions partial derivatives at x of the specified order. - - - - Creates a function handle for the partial derivative of a multivariate function. - - Input function handle. - Index of the independent variable for partial derivative. - Derivative order. - Function handle that evaluates partial derivative of input function at a fixed order. - - - - Creates a function handle for the partial derivative of a vector multivariate function. - - Input function handle. - Index of the independent variable for partial derivative. - Derivative order. - Function handle that evaluates partial derivative of input function at fixed order. - - - - Evaluates the mixed partial derivative of variable order for multivariate functions. - - - This function recursively uses to evaluate mixed partial derivative. - Therefore, it is more efficient to call for higher order derivatives of - a single independent variable. - - Multivariate function handle. - Points at which to evaluate the derivative. 
- Vector of indices for the independent variables at descending derivative orders. - Highest order of differentiation. - Current function value at center. - Function mixed partial derivative at x of the specified order. - - - - Evaluates the mixed partial derivative of variable order for multivariate function arrays. - - - This function recursively uses to evaluate mixed partial derivative. - Therefore, it is more efficient to call for higher order derivatives of - a single independent variable. - - Multivariate function array handle. - Vector at which to evaluate the derivative. - Vector of indices for the independent variables at descending derivative orders. - Highest order of differentiation. - Current function value at center. - Function mixed partial derivatives at x of the specified order. - - - - Creates a function handle for the mixed partial derivative of a multivariate function. - - Input function handle. - Vector of indices for the independent variables at descending derivative orders. - Highest derivative order. - Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. - - - - Creates a function handle for the mixed partial derivative of a multivariate vector function. - - Input vector function handle. - Vector of indices for the independent variables at descending derivative orders. - Highest derivative order. - Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. - - - - Resets the evaluation counter. - - - - - Class for evaluating the Hessian of a smooth continuously differentiable function using finite differences. - By default, a central 3-point method is used. - - - - - Number of function evaluations. - - - - - Creates a numerical Hessian object with a three point central difference method. - - - - - Creates a numerical Hessian with a specified differentiation scheme. - - Number of points for Hessian evaluation. - Center point for differentiation. - - - - Evaluates the Hessian of the scalar univariate function f at points x. - - Scalar univariate function handle. - Point at which to evaluate Hessian. - Hessian tensor. - - - - Evaluates the Hessian of a multivariate function f at points x. - - - This method of computing the Hessian is only valid for Lipschitz continuous functions. - The function mirrors the Hessian along the diagonal since d2f/dxdy = d2f/dydx for continuously differentiable functions. - - Multivariate function handle.> - Points at which to evaluate Hessian.> - Hessian tensor. - - - - Resets the function evaluation counter for the Hessian. - - - - - Class for evaluating the Jacobian of a function using finite differences. - By default, a central 3-point method is used. - - - - - Number of function evaluations. - - - - - Creates a numerical Jacobian object with a three point central difference method. - - - - - Creates a numerical Jacobian with a specified differentiation scheme. - - Number of points for Jacobian evaluation. - Center point for differentiation. - - - - Evaluates the Jacobian of scalar univariate function f at point x. - - Scalar univariate function handle. - Point at which to evaluate Jacobian. - Jacobian vector. - - - - Evaluates the Jacobian of a multivariate function f at vector x. - - - This function assumes that the length of vector x consistent with the argument count of f. - - Multivariate function handle. - Points at which to evaluate Jacobian. - Jacobian vector. 
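The NumericalJacobian described above builds the Jacobian column by column from finite differences of each independent variable. Below is a compact stand-alone sketch of that idea, leaving out the function-evaluation counter and the configurable point/center schemes:

```csharp
using System;

// Finite-difference Jacobian: column j is the central difference of f with
// respect to x[j]. This is an illustrative sketch, not the library's code.
class JacobianSketch
{
    static double[,] Jacobian(Func<double[], double[]> f, double[] x)
    {
        double[] fx = f(x);                       // only used for the output dimension
        var J = new double[fx.Length, x.Length];

        for (int j = 0; j < x.Length; j++)
        {
            double h = Math.Pow(2.2e-16, 1.0 / 3.0) * (1.0 + Math.Abs(x[j]));
            double[] xp = (double[])x.Clone(); xp[j] += h;
            double[] xm = (double[])x.Clone(); xm[j] -= h;
            double[] fp = f(xp);
            double[] fm = f(xm);
            for (int i = 0; i < fx.Length; i++)
                J[i, j] = (fp[i] - fm[i]) / (2.0 * h);
        }
        return J;
    }

    static void Main()
    {
        // f(x,y) = (x*y, x + y^2)  ->  J = [[y, x], [1, 2y]]
        Func<double[], double[]> f = v => new[] { v[0] * v[1], v[0] + v[1] * v[1] };
        var J = Jacobian(f, new[] { 2.0, 3.0 });
        Console.WriteLine($"{J[0, 0]} {J[0, 1]}");   // ~ 3 2
        Console.WriteLine($"{J[1, 0]} {J[1, 1]}");   // ~ 1 6
    }
}
```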
- - - - Evaluates the Jacobian of a multivariate function f at vector x given a current function value. - - - To minimize the number of function evaluations, a user can supply the current value of the function - to be used in computing the Jacobian. This value must correspond to the "center" location for the - finite differencing. If a scheme is used where the center value is not evaluated, this will provide no - added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. - - Multivariate function handle. - Points at which to evaluate Jacobian. - Current function value at finite difference center. - Jacobian vector. - - - - Evaluates the Jacobian of a multivariate function array f at vector x. - - Multivariate function array handle. - Vector at which to evaluate Jacobian. - Jacobian matrix. - - - - Evaluates the Jacobian of a multivariate function array f at vector x given a vector of current function values. - - - To minimize the number of function evaluations, a user can supply a vector of current values of the functions - to be used in computing the Jacobian. These value must correspond to the "center" location for the - finite differencing. If a scheme is used where the center value is not evaluated, this will provide no - added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. - - Multivariate function array handle. - Vector at which to evaluate Jacobian. - Vector of current function values. - Jacobian matrix. - - - - Resets the function evaluation counter for the Jacobian. - - - - - Evaluates the Riemann-Liouville fractional derivative that uses the double exponential integration. - - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The expected relative accuracy of the Double-Exponential integration. - Approximation of the differintegral of order n at x. - - - - Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Legendre integration. - - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The number of Gauss-Legendre points. - Approximation of the differintegral of order n at x. - - - - Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Kronrod integration. - - - order = 1.0 : normal derivative - order = 0.5 : semi-derivative - order = -0.5 : semi-integral - order = -1.0 : normal integral - - The analytic smooth function to differintegrate. - The evaluation point. - The order of fractional derivative. - The reference point of integration. - The expected relative accuracy of the Gauss-Kronrod integration. - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. - Approximation of the differintegral of order n at x. - - - - Metrics to measure the distance between two structures. - - - - - Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. - - - - - Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. - - - - - Sum of Absolute Difference (SAD), i.e. 
the L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Euclidean Distance, i.e. the L2-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Manhattan Distance, i.e. the L1-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Chebyshev Distance, i.e. the Infinity-norm of the difference. - - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. - - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. - - - - - Minkowski Distance, i.e. the generalized p-norm of the difference. - - - - - Canberra Distance, a weighted version of the L1-norm of the difference. - - - - - Canberra Distance, a weighted version of the L1-norm of the difference. - - - - - Cosine Distance, representing the angular distance while ignoring the scale. - - - - - Cosine Distance, representing the angular distance while ignoring the scale. - - - - - Hamming Distance, i.e. the number of positions that have different values in the vectors. - - - - - Hamming Distance, i.e. the number of positions that have different values in the vectors. - - - - - Pearson's distance, i.e. 1 - the person correlation coefficient. - - - - - Jaccard distance, i.e. 1 - the Jaccard index. - - Thrown if a or b are null. - Throw if a and b are of different lengths. - Jaccard distance. - - - - Jaccard distance, i.e. 1 - the Jaccard index. - - Thrown if a or b are null. - Throw if a and b are of different lengths. - Jaccard distance. - - - - Discrete Univariate Bernoulli distribution. - The Bernoulli distribution is a distribution over bits. The parameter - p specifies the probability that a 1 is generated. - Wikipedia - Bernoulli distribution. - - - - - Initializes a new instance of the Bernoulli class. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - If the Bernoulli parameter is not in the range [0,1]. - - - - Initializes a new instance of the Bernoulli class. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - The random number generator which is used to draw random samples. - If the Bernoulli parameter is not in the range [0,1]. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. 
- - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - - - - Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the mode of the distribution. - - - - - Gets all modes of the distribution. - - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - the cumulative distribution at location . - - - - - Generates one sample from the Bernoulli distribution. - - The random source to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A random sample from the Bernoulli distribution. - - - - Samples a Bernoulli distributed random variable. - - A sample from the Bernoulli distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Bernoulli distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a Bernoulli distributed random variable. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A sample from the Bernoulli distribution. - - - - Samples a sequence of Bernoulli distributed random variables. - - The random number generator to use. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Samples a Bernoulli distributed random variable. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - A sample from the Bernoulli distribution. 
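The Bernoulli members above reduce to two very small formulas: P(X=1) = p, P(X=0) = 1-p, and sampling by comparing a uniform draw against p. A bare-bones sketch (no parameter validation, fixed System.Random source, unlike the library):

```csharp
using System;

// Minimal Bernoulli PMF and inverse-transform sampling, for illustration only.
class BernoulliSketch
{
    static double Pmf(double p, int k) => k == 1 ? p : (k == 0 ? 1.0 - p : 0.0);

    static int Sample(Random rng, double p) => rng.NextDouble() < p ? 1 : 0;

    static void Main()
    {
        var rng = new Random(42);
        double p = 0.3;

        Console.WriteLine(Pmf(p, 1));   // 0.3
        Console.WriteLine(Pmf(p, 0));   // 0.7

        // Empirical mean of many samples should approach p.
        int n = 100000, ones = 0;
        for (int i = 0; i < n; i++) ones += Sample(rng, p);
        Console.WriteLine((double)ones / n);
    }
}
```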
- - - - Samples a sequence of Bernoulli distributed random variables. - - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The probability (p) of generating one. Range: 0 ≤ p ≤ 1. - a sequence of samples from the distribution. - - - - Continuous Univariate Beta distribution. - For details about this distribution, see - Wikipedia - Beta distribution. - - - There are a few special cases for the parameterization of the Beta distribution. When both - shape parameters are positive infinity, the Beta distribution degenerates to a point distribution - at 0.5. When one of the shape parameters is positive infinity, the distribution degenerates to a point - distribution at the positive infinity. When both shape parameters are 0.0, the Beta distribution - degenerates to a Bernoulli distribution with parameter 0.5. When one shape parameter is 0.0, the - distribution degenerates to a point distribution at the non-zero shape parameter. - - - - - Initializes a new instance of the Beta class. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - Initializes a new instance of the Beta class. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - A string representation of the Beta distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - Gets the α shape parameter of the Beta distribution. Range: α ≥ 0. - - - - - Gets the β shape parameter of the Beta distribution. Range: β ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Beta distribution. - - - - - Gets the variance of the Beta distribution. - - - - - Gets the standard deviation of the Beta distribution. - - - - - Gets the entropy of the Beta distribution. - - - - - Gets the skewness of the Beta distribution. - - - - - Gets the mode of the Beta distribution; when there are multiple answers, this routine will return 0.5. - - - - - Gets the median of the Beta distribution. - - - - - Gets the minimum of the Beta distribution. - - - - - Gets the maximum of the Beta distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . 
- - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the Beta distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Beta distribution. - - a sequence of samples from the distribution. - - - - Samples Beta distributed random variables by sampling two Gamma variables and normalizing. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a random number from the Beta distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. 
- The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - a sequence of samples from the distribution. - - - - Discrete Univariate Beta-Binomial distribution. - The beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising - when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. - The beta-binomial distribution is the binomial distribution in which the probability of success at each of n trials is not fixed but randomly drawn from a beta distribution. - It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics to capture overdispersion in binomial type distributed data. - Wikipedia - Beta-Binomial distribution. - - - - - Initializes a new instance of the class. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - - - - Initializes a new instance of the class. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - - - - Tests whether the provided values are valid parameters for this distribution. - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. 
Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location in the domain where we want to evaluate the probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The number of Bernoulli trials n - n is a positive integer - Shape parameter alpha of the Beta distribution. Range: a > 0. - Shape parameter beta of the Beta distribution. Range: b > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Samples BetaBinomial distributed random variables by sampling a Beta distribution then passing to a Binomial distribution. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a random number from the BetaBinomial distribution. - - - - Samples a BetaBinomial distributed random variable. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of BetaBinomial distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a BetaBinomial distributed random variable. - - The random number generator to use. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - - - - Samples an array of BetaBinomial distributed random variables. - - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The α shape parameter of the Beta distribution. Range: α ≥ 0. - The β shape parameter of the Beta distribution. Range: β ≥ 0. - The number of trials (n). Range: n ≥ 0. - - - - Initializes a new instance of the BetaScaled class. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - - - - Initializes a new instance of the BetaScaled class. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The random number generator which is used to draw random samples. 
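
For the continuous Beta entries above (Density, CumulativeDistribution, the InvCDF that the text warns is slow, and sampling via two Gamma draws), a short sketch under the same MathNet.Numerics assumption; α = 2 and β = 5 are arbitrary example shapes:

```csharp
// Sketch only; assumes MathNet.Numerics.Distributions.Beta as documented above.
using System;
using MathNet.Numerics.Distributions;

class BetaDemo
{
    static void Main()
    {
        var beta = new Beta(2.0, 5.0);                         // shape parameters α = 2, β = 5
        Console.WriteLine(beta.Mean);                          // α/(α+β) = 0.2857...
        Console.WriteLine(beta.Density(0.25));                 // PDF at x = 0.25
        Console.WriteLine(beta.CumulativeDistribution(0.25));  // P(X <= 0.25)

        // Static helpers with explicit parameters:
        Console.WriteLine(Beta.PDF(2.0, 5.0, 0.25));
        Console.WriteLine(Beta.InvCDF(2.0, 5.0, 0.5));         // median (doc warns: slow/unreliable)
        Console.WriteLine(Beta.Sample(new Random(1), 2.0, 5.0));
    }
}
```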
- - - - Create a Beta PERT distribution, used in risk analysis and other domains where an expert forecast - is used to construct an underlying beta distribution. - - The minimum value. - The maximum value. - The most likely value (mode). - The random number generator which is used to draw random samples. - The Beta distribution derived from the PERT parameters. - - - - A string representation of the distribution. - - A string representation of the BetaScaled distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - - - - Gets the α shape parameter of the BetaScaled distribution. Range: α > 0. - - - - - Gets the β shape parameter of the BetaScaled distribution. Range: β > 0. - - - - - Gets the location (μ) of the BetaScaled distribution. - - - - - Gets the scale (σ) of the BetaScaled distribution. Range: σ > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the BetaScaled distribution. - - - - - Gets the variance of the BetaScaled distribution. - - - - - Gets the standard deviation of the BetaScaled distribution. - - - - - Gets the entropy of the BetaScaled distribution. - - - - - Gets the skewness of the BetaScaled distribution. - - - - - Gets the mode of the BetaScaled distribution; when there are multiple answers, this routine will return 0.5. - - - - - Gets the median of the BetaScaled distribution. - - - - - Gets the minimum of the BetaScaled distribution. - - - - - Gets the maximum of the BetaScaled distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The α shape parameter of the BetaScaled distribution. 
Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the distribution. - - The random number generator to use. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The α shape parameter of the BetaScaled distribution. Range: α > 0. - The β shape parameter of the BetaScaled distribution. Range: β > 0. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. 
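
The BetaScaled section above includes a factory that builds a Beta PERT distribution from a minimum, maximum and most-likely value, as used in risk analysis. The sketch below uses the standard PERT shape mapping, α = 1 + 4(m − min)/(max − min) and β = 1 + 4(max − m)/(max − min); it illustrates the idea from scratch and is not necessarily the exact formula the library applies internally:

```csharp
// From-scratch sketch of the classic PERT -> Beta shape mapping described above.
// The library's own factory may differ in details; the numbers are illustrative.
using System;

class PertSketch
{
    static (double alpha, double beta) PertShapes(double min, double max, double mostLikely)
    {
        double range = max - min;
        double alpha = 1.0 + 4.0 * (mostLikely - min) / range;
        double beta  = 1.0 + 4.0 * (max - mostLikely) / range;
        return (alpha, beta);
    }

    static void Main()
    {
        // Expert forecast: a task takes between 2 and 10 days, most likely 4.
        var (a, b) = PertShapes(2.0, 10.0, 4.0);
        Console.WriteLine($"alpha = {a}, beta = {b}");        // alpha = 2, beta = 4
        // PERT mean = (min + 4*mostLikely + max) / 6
        Console.WriteLine((2.0 + 4.0 * 4.0 + 10.0) / 6.0);    // 4.666...
    }
}
```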
- - - - Discrete Univariate Binomial distribution. - For details about this distribution, see - Wikipedia - Binomial distribution. - - - The distribution is parameterized by a probability (between 0.0 and 1.0). - - - - - Initializes a new instance of the Binomial class. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - If is not in the interval [0.0,1.0]. - If is negative. - - - - Initializes a new instance of the Binomial class. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The random number generator which is used to draw random samples. - If is not in the interval [0.0,1.0]. - If is negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - - - - Gets the success probability in each trial. Range: 0 ≤ p ≤ 1. - - - - - Gets the number of trials. Range: n ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the mode of the distribution. - - - - - Gets all modes of the distribution. - - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - the cumulative distribution at location . - - - - - Generates a sample from the Binomial distribution without doing parameter checking. - - The random number generator to use. - The success probability (p) in each trial. 
Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successful trials. - - - - Samples a Binomially distributed random variable. - - The number of successes in N trials. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Binomially distributed random variables. - - a sequence of successes in N trials. - - - - Samples a binomially distributed random variable. - - The random number generator to use. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successes in trials. - - - - Samples a sequence of binomially distributed random variable. - - The random number generator to use. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Samples a binomially distributed random variable. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - The number of successes in trials. - - - - Samples a sequence of binomially distributed random variable. - - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. - The number of trials (n). Range: n ≥ 0. - a sequence of successes in trials. - - - - Gets the scale (a) of the distribution. Range: a > 0. - - - - - Gets the first shape parameter (c) of the distribution. Range: c > 0. - - - - - Gets the second shape parameter (k) of the distribution. Range: k > 0. - - - - - Initializes a new instance of the Burr Type XII class. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The random number generator which is used to draw random samples. Optional, can be null. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - - - - Gets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Burr distribution. - - - - - Gets the variance of the Burr distribution. - - - - - Gets the standard deviation of the Burr distribution. - - - - - Gets the mode of the Burr distribution. - - - - - Gets the minimum of the Burr distribution. - - - - - Gets the maximum of the Burr distribution. - - - - - Gets the entropy of the Burr distribution (currently not supported). - - - - - Gets the skewness of the Burr distribution. - - - - - Gets the median of the Burr distribution. - - - - - Generates a sample from the Burr distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. 
- - The array to fill with the samples. - - - - Generates a sequence of samples from the Burr distribution. - - a sequence of samples from the distribution. - - - - Generates a sample from the Burr distribution. - - The random number generator to use. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - - - - Generates a sequence of samples from the Burr distribution. - - The random number generator to use. - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - a sequence of samples from the distribution. - - - - Gets the n-th raw moment of the distribution. - - The order (n) of the moment. Range: n ≥ 1. - the n-th moment of the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The scale parameter a of the Burr distribution. Range: a > 0. - The first shape parameter c of the Burr distribution. Range: c > 0. - The second shape parameter k of the Burr distribution. Range: k > 0. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Discrete Univariate Categorical distribution. - For details about this distribution, see - Wikipedia - Categorical distribution. This - distribution is sometimes called the Discrete distribution. - - - The distribution is parameterized by a vector of ratios: in other words, the parameter - does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized - to sum to 1 in floating point representation. 
- - - Support: 0..k where k = length(probability mass array)-1 - - - - - Initializes a new instance of the Categorical class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - If any of the probabilities are negative or do not sum to one. - - - - Initializes a new instance of the Categorical class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The random number generator which is used to draw random samples. - If any of the probabilities are negative or do not sum to one. - - - - Initializes a new instance of the Categorical class from a . The distribution - will not be automatically updated when the histogram changes. The categorical distribution will have - one value for each bucket and a probability for that value proportional to the bucket count. - - The histogram from which to create the categorical variable. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Checks whether the parameters of the distribution are valid. - - An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. - If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true - - - - Checks whether the parameters of the distribution are valid. - - An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. - If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true - - - - Gets the probability mass vector (non-negative ratios) of the multinomial. - - Sometimes the normalized probability vector cannot be represented exactly in a floating point representation. - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - Throws a . - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets he mode of the distribution. - - Throws a . - - - - Gets the median of the distribution. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. 
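
The Categorical entries above stress that the ratio vector need not be normalized and that sampling works through an (unnormalized) cumulative distribution plus an inverse-CDF lookup. A from-scratch sketch of that mechanism, independent of any particular library (the weights are illustrative):

```csharp
// Categorical sampling via an unnormalized cumulative sum, mirroring the
// CDF/InvCDF mechanism described above.
using System;

class CategoricalSketch
{
    static int SampleIndex(Random rng, double[] weights)
    {
        // Unnormalized cumulative distribution (running sum of nonnegative ratios).
        double[] cdf = new double[weights.Length];
        double sum = 0.0;
        for (int i = 0; i < weights.Length; i++)
        {
            sum += weights[i];
            cdf[i] = sum;
        }

        // Inverse CDF: first index whose cumulative weight exceeds u * total.
        double u = rng.NextDouble() * sum;
        for (int i = 0; i < cdf.Length; i++)
            if (u < cdf[i]) return i;
        return cdf.Length - 1; // guard against rounding at the upper edge
    }

    static void Main()
    {
        var rng = new Random(7);
        double[] ratios = { 1.0, 2.0, 7.0 };   // need not sum to 1
        var counts = new int[ratios.Length];
        for (int n = 0; n < 10000; n++) counts[SampleIndex(rng, ratios)]++;
        Console.WriteLine(string.Join(",", counts)); // roughly 10%, 20%, 70% of the draws
    }
}
```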
- - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. - - An array corresponding to a CDF for a categorical distribution. Not assumed to be normalized. - A real number between 0 and 1. - An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. - - - - Computes the cumulative distribution function. This method performs no parameter checking. - If the probability mass was normalized, the resulting cumulative distribution is normalized as well (up to numerical errors). - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - An array representing the unnormalized cumulative distribution function. - - - - Returns one trials from the categorical distribution. - - The random number generator to use. - The (unnormalized) cumulative distribution of the probability distribution. - One sample from the categorical distribution implied by . - - - - Samples a Binomially distributed random variable. - - The number of successful trials. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Bernoulli distributed random variables. - - a sequence of successful trial counts. - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - The random number generator to use. - An array of nonnegative ratios. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - The random number generator to use. - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - An array of nonnegative ratios. Not assumed to be normalized. 
- random integers between 0 and the size of the categorical (exclusive). - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - An array of nonnegative ratios. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - An array of nonnegative ratios. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - The random number generator to use. - An array of the cumulative distribution. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - The random number generator to use. - An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Samples one categorical distributed random variable; also known as the Discrete distribution. - - An array of the cumulative distribution. Not assumed to be normalized. - One random integer between 0 and the size of the categorical (exclusive). - - - - Samples a categorically distributed random variable. - - An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - An array of the cumulative distribution. Not assumed to be normalized. - random integers between 0 and the size of the categorical (exclusive). - - - - Continuous Univariate Cauchy distribution. - The Cauchy distribution is a symmetric continuous probability distribution. For details about this distribution, see - Wikipedia - Cauchy distribution. - - - - - Initializes a new instance of the class with the location parameter set to 0 and the scale parameter set to 1 - - - - - Initializes a new instance of the class. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - - - - Initializes a new instance of the class. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - - - - Gets the location (x0) of the distribution. - - - - - Gets the scale (γ) of the distribution. Range: γ > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. 
- - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Draws a random sample from the distribution. - - A random number from this distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Cauchy distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - the inverse cumulative density at . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. 
- - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The location (x0) of the distribution. - The scale (γ) of the distribution. Range: γ > 0. - a sequence of samples from the distribution. - - - - Continuous Univariate Chi distribution. - This distribution is a continuous probability distribution. The distribution usually arises when a k-dimensional vector's orthogonal - components are independent and each follow a standard normal distribution. The length of the vector will - then have a chi distribution. - Wikipedia - Chi distribution. - - - - - Initializes a new instance of the class. - - The degrees of freedom (k) of the distribution. Range: k > 0. - - - - Initializes a new instance of the class. - - The degrees of freedom (k) of the distribution. Range: k > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - - - - Gets the degrees of freedom (k) of the Chi distribution. Range: k > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Generates a sample from the Chi distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Chi distribution. - - a sequence of samples from the distribution. - - - - Samples the distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a random number from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The degrees of freedom (k) of the distribution. Range: k > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The degrees of freedom (k) of the distribution. Range: k > 0. - The location at which to compute the density. 
- the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The degrees of freedom (k) of the distribution. Range: k > 0. - the cumulative distribution at location . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sequence of samples from the distribution. - - - - Continuous Univariate Chi-Squared distribution. - This distribution is a sum of the squares of k independent standard normal random variables. - Wikipedia - ChiSquare distribution. - - - - - Initializes a new instance of the class. - - The degrees of freedom (k) of the distribution. Range: k > 0. - - - - Initializes a new instance of the class. - - The degrees of freedom (k) of the distribution. Range: k > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - - - - Gets the degrees of freedom (k) of the Chi-Squared distribution. Range: k > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. 
This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the ChiSquare distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the ChiSquare distribution. - - a sequence of samples from the distribution. - - - - Samples the distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a random number from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The degrees of freedom (k) of the distribution. Range: k > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The degrees of freedom (k) of the distribution. Range: k > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The degrees of freedom (k) of the distribution. Range: k > 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The degrees of freedom (k) of the distribution. Range: k > 0. - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - Generates a sample from the ChiSquare distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Generates a sample from the ChiSquare distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The degrees of freedom (k) of the distribution. Range: k > 0. - a sample from the distribution. - - - - Continuous Univariate Uniform distribution. - The continuous uniform distribution is a distribution over real numbers. For details about this distribution, see - Wikipedia - Continuous uniform distribution. - - - - - Initializes a new instance of the ContinuousUniform class with lower bound 0 and upper bound 1. - - - - - Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - If the upper bound is smaller than the lower bound. 
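
Stepping back to the chi-squared entries a few paragraphs above: the CDF/InvCDF pair documented there is what turns a test statistic into a p-value or a critical value. A hedged sketch, again assuming the MathNet.Numerics classes; k = 3 degrees of freedom and the 95% level are illustrative choices:

```csharp
// Sketch only; assumes MathNet.Numerics.Distributions.ChiSquared as documented above.
using System;
using MathNet.Numerics.Distributions;

class ChiSquaredDemo
{
    static void Main()
    {
        double k = 3.0;                                    // degrees of freedom
        double stat = 7.81;                                // some observed test statistic

        double pValue = 1.0 - ChiSquared.CDF(k, stat);     // upper-tail probability
        double critical = ChiSquared.InvCDF(k, 0.95);      // 95% quantile (~7.815 for k = 3)

        Console.WriteLine($"p-value  = {pValue:F4}");
        Console.WriteLine($"critical = {critical:F3}");
    }
}
```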
- - - - Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - The random number generator which is used to draw random samples. - If the upper bound is smaller than the lower bound. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - - - - Gets the lower bound of the distribution. - - - - - Gets the upper bound of the distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - - Gets the median of the distribution. - - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the ContinuousUniform distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the ContinuousUniform distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - the inverse cumulative density at . 
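
The continuous uniform entries above have simple closed forms: density 1/(upper − lower) inside the interval, CDF (x − lower)/(upper − lower), and InvCDF lower + p·(upper − lower). A tiny from-scratch sketch is enough to check them; the bounds 2..6 are arbitrary:

```csharp
// Closed-form uniform PDF/CDF/InvCDF as described above; bounds are illustrative.
using System;

class UniformSketch
{
    const double Lower = 2.0, Upper = 6.0;

    static double Pdf(double x) => (x < Lower || x > Upper) ? 0.0 : 1.0 / (Upper - Lower);
    static double Cdf(double x) => x <= Lower ? 0.0 : x >= Upper ? 1.0 : (x - Lower) / (Upper - Lower);
    static double InvCdf(double p) => Lower + p * (Upper - Lower);

    static void Main()
    {
        Console.WriteLine(Pdf(3.0));     // 0.25
        Console.WriteLine(Cdf(3.0));     // 0.25
        Console.WriteLine(InvCdf(0.5));  // 4, the median (and mean) of U(2,6)
    }
}
```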
- - - - - Generates a sample from the ContinuousUniform distribution. - - The random number generator to use. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a uniformly distributed sample. - - - - Generates a sequence of samples from the ContinuousUniform distribution. - - The random number generator to use. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a sequence of uniformly distributed samples. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a sequence of samples from the distribution. - - - - Generates a sample from the ContinuousUniform distribution. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a uniformly distributed sample. - - - - Generates a sequence of samples from the ContinuousUniform distribution. - - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a sequence of uniformly distributed samples. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - Lower bound. Range: lower ≤ upper. - Upper bound. Range: lower ≤ upper. - a sequence of samples from the distribution. - - - - Discrete Univariate Conway-Maxwell-Poisson distribution. - The Conway-Maxwell-Poisson distribution is a generalization of the Poisson, Geometric and Bernoulli - distributions. It is parameterized by two real numbers "lambda" and "nu". For - - nu = 0 the distribution reverts to a Geometric distribution - nu = 1 the distribution reverts to the Poisson distribution - nu -> infinity the distribution converges to a Bernoulli distribution - - This implementation will cache the value of the normalization constant. - Wikipedia - ConwayMaxwellPoisson distribution. - - - - - The mean of the distribution. - - - - - The variance of the distribution. - - - - - Caches the value of the normalization constant. - - - - - Since many properties of the distribution can only be computed approximately, the tolerance - level specifies how much error we accept. - - - - - Initializes a new instance of the class. - - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - - - - Initializes a new instance of the class. - - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - A that represents this instance. - - - - Tests whether the provided values are valid parameters for this distribution. - - The lambda (λ) parameter. Range: λ > 0. - The rate of decay (ν) parameter. Range: ν ≥ 0. - - - - Gets the lambda (λ) parameter. Range: λ > 0. - - - - - Gets the rate of decay (ν) parameter. Range: ν ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. 
* The largest element in the domain representable by an integer.
* Probability functions at k: the probability mass (PMF, P(X = k)), the log probability mass (lnPMF) and the cumulative distribution (CDF, P(X ≤ x)), as instance members and as static equivalents taking λ (λ > 0) and the rate of decay ν (ν ≥ 0) explicitly.
* The normalization constant of the Conway-Maxwell-Poisson distribution, and a helper that computes an approximate normalization constant for given λ and ν.
* Sampling: an unchecked single-trial sampler taking a random number generator, λ, ν and the normalization constant z; instance members to draw one sample, fill an array, or generate a sequence of samples; and static Sample/Samples/Fill helpers taking λ and ν, with or without an explicit random number generator.
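A minimal sketch of the documented members, assuming the class is exposed as ConwayMaxwellPoisson(lambda, nu) with Probability, CumulativeDistribution and Sample:

```csharp
using System;
using MathNet.Numerics.Distributions;

// Assumed constructor order (lambda, nu); for nu = 1 the CMP reverts to Poisson(lambda).
var cmp = new ConwayMaxwellPoisson(2.0, 1.0);

Console.WriteLine(cmp.Probability(3));              // P(X = 3), should match the Poisson(2) pmf at 3
Console.WriteLine(cmp.CumulativeDistribution(3));   // P(X <= 3)
Console.WriteLine(cmp.Sample());                    // one random draw
```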
Multivariate Dirichlet distribution. For details about this distribution, see Wikipedia - Dirichlet distribution.

* Constructors: from an array of Dirichlet parameters, or from a single parameter value and a dimension (a symmetric Dirichlet), each with an optional random number generator; a string representation; and a parameter-validity test (no parameter may be less than zero and at least one parameter must be larger than zero).
* Properties: the parameters of the distribution (gettable and settable), the random number generator used to draw random samples, the dimension, the sum of the Dirichlet parameters, the mean, the variance and the entropy.
* Density and log density at a location vector x; the Dirichlet distribution requires that the components of x sum to 1 (the last component may be left out and is then computed from the others).
* Sampling: draw a Dirichlet-distributed random vector, either from the instance or from a static helper taking a random number generator and the parameter array.
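Assuming the class is exposed as Dirichlet with a double[] parameter array, Sample() returning a probability vector and Density(double[]), a usage sketch:

```csharp
using System;
using MathNet.Numerics.Distributions;

// Assumed constructor: an array of concentration parameters (all >= 0, at least one > 0).
var dirichlet = new Dirichlet(new[] { 1.0, 2.0, 3.0 });

double[] p = dirichlet.Sample();        // a random probability vector; components sum to 1
Console.WriteLine(string.Join(", ", p));

// Density at a point on the simplex (components must sum to 1).
Console.WriteLine(dirichlet.Density(new[] { 0.2, 0.3, 0.5 }));
```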
Discrete Univariate Uniform distribution (DiscreteUniform). The discrete uniform distribution is a distribution over integers, parameterized by a lower and an upper bound (both inclusive). See Wikipedia - Discrete uniform distribution.

* Constructors taking the inclusive lower and upper bounds (lower ≤ upper), with an optional random number generator; a string representation; and a parameter-validity test.
* Properties: the inclusive lower and upper bounds, the random number generator used to draw random samples, mean, standard deviation, variance, entropy, skewness, the smallest and largest elements in the domain representable by an integer, the mode (since every element has the same probability, the middle element is returned) and the median.
* Probability functions at k: probability mass (PMF, P(X = k)), log probability mass (lnPMF) and cumulative distribution (CDF, P(X ≤ x)), as instance members and as static equivalents taking the inclusive bounds explicitly.
* Sampling: an unchecked single-sample generator that does no parameter checking, instance members to draw one sample, fill an array, or generate a sequence, and static Sample/Samples/Fill helpers taking the inclusive bounds, with or without an explicit random number generator.
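A short sketch, assuming the standard DiscreteUniform(lower, upper) constructor with Probability, CumulativeDistribution and Samples members:

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

// Fair six-sided die: integers 1..6, both bounds inclusive (assumed constructor: lower, upper).
var die = new DiscreteUniform(1, 6);

Console.WriteLine(die.Probability(3));            // P(X = 3) = 1/6
Console.WriteLine(die.CumulativeDistribution(3)); // P(X <= 3) = 0.5
Console.WriteLine(string.Join(" ", die.Samples().Take(10))); // ten rolls
```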
Continuous Univariate Erlang distribution. This distribution is a continuous probability distribution with wide applicability, primarily due to its relation to the exponential and Gamma distributions. See Wikipedia - Erlang distribution.

* Constructors taking the shape k (k ≥ 0) and the rate or inverse scale λ (λ ≥ 0), with an optional random number generator, plus factory methods that construct an Erlang distribution from a shape and scale (μ ≥ 0) or from a shape and inverse scale; a string representation; and a parameter-validity test.
* Properties: shape, rate (inverse scale), scale, the random number generator used to draw random samples, mean, variance, standard deviation, entropy, skewness, mode, median, minimum and maximum.
* Instance density functions at x: the probability density (PDF, ∂P(X ≤ x)/∂x) and the log density (lnPDF).
* The cumulative distribution (CDF, P(X ≤ x)) at x.
* Sampling: draw one sample, fill an array, or generate a sequence of samples from the Erlang distribution.
* Static equivalents of PDF, lnPDF and CDF taking the shape k (k ≥ 0) and rate λ (λ ≥ 0) explicitly, and static Sample/Samples/Fill helpers with or without an explicit random number generator.
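A hedged sketch, assuming an Erlang(shape, rate) constructor with the usual Density/CumulativeDistribution/Sample members (the exact constructor signature is not visible in the stripped text):

```csharp
using System;
using MathNet.Numerics.Distributions;

// Assumed constructor: integer shape k and rate lambda; an Erlang is a Gamma with integer shape.
var erlang = new Erlang(3, 0.5);

Console.WriteLine(erlang.Mean);                        // k / rate = 6
Console.WriteLine(erlang.Density(4.0));                // PDF at x = 4
Console.WriteLine(erlang.CumulativeDistribution(4.0)); // P(X <= 4)
Console.WriteLine(erlang.Sample());                    // one random draw
```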
Continuous Univariate Exponential distribution. The exponential distribution is a distribution over the real numbers parameterized by one non-negative rate parameter. See Wikipedia - Exponential distribution.

* Constructors taking the rate λ (λ ≥ 0), with an optional random number generator; a string representation; and a parameter-validity test.
* Properties: the rate λ, the random number generator used to draw random samples, mean, variance, standard deviation, entropy, skewness, mode, median, minimum and maximum.
* Density functions at x: probability density (PDF, ∂P(X ≤ x)/∂x), log density (lnPDF), cumulative distribution (CDF, P(X ≤ x)) and the inverse CDF (quantile/percent point function), as instance members and as static equivalents taking λ explicitly.
* Sampling: draw one sample, fill an array, or generate a sequence of samples, as instance members and as static Sample/Samples/Fill helpers taking λ, with or without an explicit random number generator.
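Assuming the class is exposed as Exponential(rate) with a static InvCDF helper, a minimal sketch:

```csharp
using System;
using MathNet.Numerics.Distributions;

// Exponential with rate lambda = 0.5, i.e. mean 2 (assumed single-parameter constructor).
var exp = new Exponential(0.5);

Console.WriteLine(exp.Mean);                          // 1 / lambda = 2
Console.WriteLine(exp.CumulativeDistribution(2.0));   // P(X <= 2) = 1 - e^(-1)
Console.WriteLine(Exponential.InvCDF(0.5, 0.95));     // static quantile helper (assumed signature: rate, p)
Console.WriteLine(exp.Sample());
```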
Continuous Univariate F-distribution (FisherSnedecor), also known as the Fisher-Snedecor distribution. For details about this distribution, see Wikipedia - F-distribution.

* Constructors taking the first and second degrees of freedom d1 and d2 (d1 > 0, d2 > 0), with an optional random number generator; a string representation; and a parameter-validity test.
* Properties: the first and second degrees of freedom, the random number generator used to draw random samples, mean, variance, standard deviation, entropy, skewness, mode, median, minimum and maximum.
* Density functions at x: probability density (PDF, ∂P(X ≤ x)/∂x), log density (lnPDF), cumulative distribution (CDF, P(X ≤ x)) and the inverse CDF (quantile/percent point function). Warning: the inverse CDF is currently not an explicit implementation, hence slow and unreliable.
* Sampling: draw one sample, fill an array, or generate a sequence of samples, plus an unchecked single-sample generator taking a random number generator and the two degrees of freedom.
* Static equivalents of PDF, lnPDF, CDF and InvCDF taking d1 and d2 explicitly (the same warning applies to the static InvCDF), and static Sample/Samples/Fill helpers with or without an explicit random number generator.
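A short sketch, assuming the class is exposed as FisherSnedecor(d1, d2) with the usual density and sampling members:

```csharp
using System;
using MathNet.Numerics.Distributions;

// F-distribution with d1 = 5 and d2 = 10 degrees of freedom (assumed constructor order).
var f = new FisherSnedecor(5.0, 10.0);

Console.WriteLine(f.Density(1.0));                // PDF at x = 1
Console.WriteLine(f.CumulativeDistribution(2.0)); // P(X <= 2), e.g. for an F-test p-value
Console.WriteLine(f.Sample());                    // one random draw
```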
Continuous Univariate Gamma distribution. For details about this distribution, see Wikipedia - Gamma distribution. The Gamma distribution is parametrized by a shape and an inverse scale parameter. To specify a Gamma distribution that is a point distribution, set the shape parameter to the location of the point and the inverse scale to positive infinity; the distribution with shape and inverse scale both zero is undefined. Random number generation for the Gamma distribution is based on the algorithm in "A Simple Method for Generating Gamma Variables", Marsaglia & Tsang, ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, pages 363–372.

* Constructors taking the shape (k, α; range α ≥ 0) and the rate or inverse scale (β; range β ≥ 0), with an optional random number generator, plus factory methods that construct a Gamma distribution from a shape and scale (θ ≥ 0) or from a shape and inverse scale; a string representation; and a parameter-validity test.
* Properties: shape, rate (inverse scale) and scale (gettable and settable), the random number generator used to draw random samples, mean, variance, standard deviation, entropy, skewness, mode, median, minimum and maximum.
* The probability density (PDF, ∂P(X ≤ x)/∂x) at x.
* The log density (lnPDF), the cumulative distribution (CDF, P(X ≤ x)) and the inverse CDF (quantile/percent point function) at x.
* Sampling: draw one sample, fill an array, or generate a sequence of samples. The sampling implementation is based on Marsaglia & Tsang's "A Simple Method for Generating Gamma Variables" (ACM TOMS 26(3), 2000, pages 363–372) and performs no parameter checks.
* Static equivalents of PDF, lnPDF, CDF and InvCDF taking the shape (α ≥ 0) and rate (β ≥ 0) explicitly, and static Sample/Samples/Fill helpers with or without an explicit random number generator.
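A hedged sketch of the documented surface, assuming a Gamma(shape, rate) constructor, a WithShapeScale factory and a static InvCDF helper (these names are assumptions, not spelled out above):

```csharp
using System;
using MathNet.Numerics.Distributions;

// Gamma with shape 2 and rate 0.5 (assumed constructor order: shape, rate).
var gamma = new Gamma(2.0, 0.5);

Console.WriteLine(gamma.Mean);                   // shape / rate = 4
Console.WriteLine(gamma.Density(3.0));           // PDF at x = 3
Console.WriteLine(Gamma.InvCDF(2.0, 0.5, 0.95)); // static quantile helper (assumed: shape, rate, p)

// Factory from shape and scale instead of rate (assumed name WithShapeScale).
var gammaByScale = Gamma.WithShapeScale(2.0, 2.0);
Console.WriteLine(gammaByScale.Sample());
```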
Discrete Univariate Geometric distribution. The Geometric distribution is a distribution over positive integers parameterized by one positive real number; this implementation will never generate 0's. See Wikipedia - Geometric distribution.

* Constructors taking the probability p of generating a one (0 ≤ p ≤ 1), with an optional random number generator; a string representation; and a parameter-validity test.
* Properties: the probability p of generating a one, the random number generator used to draw random samples, mean, variance, standard deviation, entropy, skewness (throws a not-supported exception), mode, median, and the smallest and largest elements in the domain representable by an integer.
* Probability functions at k: probability mass (PMF, P(X = k)), log probability mass (lnPMF) and cumulative distribution (CDF, P(X ≤ x)), as instance members and as static equivalents taking p explicitly.
* Sampling: an unchecked single-sample generator, instance members to draw one sample, fill an array, or generate a sequence, and static Sample/Samples/Fill helpers taking p, with or without an explicit random number generator.
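Assuming the class is exposed as Geometric(p) with Probability and Sample members, and keeping in mind that samples here start at 1:

```csharp
using System;
using MathNet.Numerics.Distributions;

// Geometric distribution with success probability p = 0.25; samples are >= 1 in this implementation.
var geo = new Geometric(0.25);

Console.WriteLine(geo.Mean);           // 1 / p = 4
Console.WriteLine(geo.Probability(3)); // P(X = 3) = (1 - p)^2 * p
Console.WriteLine(geo.Sample());       // number of trials up to and including the first success
```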
Discrete Univariate Hypergeometric distribution. This distribution is a discrete probability distribution that describes the number of successes in a sequence of n draws from a finite population without replacement, just as the binomial distribution describes the number of successes for draws with replacement. See Wikipedia - Hypergeometric distribution.

* Constructors taking the size of the population (N), the number of successes within the population (K, M) and the number of draws without replacement (n), with an optional random number generator; a string representation; and a parameter-validity test.
* Properties: the random number generator used to draw random samples, the population size, the number of draws, the number of successes within the population, mean, variance, standard deviation, entropy, skewness, mode, median, minimum and maximum.
* Probability functions at k: probability mass (PMF, P(X = k)), log probability mass (lnPMF) and cumulative distribution (CDF, P(X ≤ x)), as instance members and as static equivalents taking the population size, the number of successes and the number of draws explicitly.
* Sampling: an unchecked single-sample generator, instance members to draw one sample (the number of successes in n trials), fill an array, or generate a sequence, and static Sample/Samples/Fill helpers taking the population size, the number of successes and the number of draws, with or without an explicit random number generator.
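A usage sketch, assuming the constructor follows the documented parameter order (population, successes, draws):

```csharp
using System;
using MathNet.Numerics.Distributions;

// Urn with N = 50 items, K = 10 marked, drawing n = 5 without replacement
// (assumed constructor order: population, successes, draws).
var hyper = new Hypergeometric(50, 10, 5);

Console.WriteLine(hyper.Mean);           // n * K / N = 1
Console.WriteLine(hyper.Probability(2)); // P(exactly 2 marked items among the 5 drawn)
Console.WriteLine(hyper.Sample());       // marked items in one random draw of 5
```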
Continuous Univariate Probability Distribution (interface). Members: the mode of the distribution, the smallest and largest elements in the domain representable by a double, the probability density (PDF, ∂P(X ≤ x)/∂x) and log density (lnPDF) at x, drawing a random sample, filling an array with samples, and drawing an infinite sequence of samples.

Discrete Univariate Probability Distribution (interface). Members: the mode of the distribution, the smallest and largest elements in the domain representable by an integer, the probability mass (PMF, P(X = k)) and log probability mass (lnPMF) at k, drawing a random sample, filling an array with samples, and drawing an infinite sequence of samples.

Probability Distribution (base interface). Member: the random number generator which is used to draw random samples (gettable and settable).

Continuous Univariate Inverse Gamma distribution. The inverse Gamma distribution is a distribution over the positive real numbers parameterized by two positive parameters. See Wikipedia - Inverse-gamma distribution.

* Constructors taking the shape α (α > 0) and the scale β (β > 0), with an optional random number generator; a string representation; and a parameter-validity test.
* Properties: shape and scale (gettable and settable), the random number generator used to draw random samples, mean, variance, standard deviation, entropy, skewness and mode.
* Additional properties: the median (throws, since no closed form is provided), the minimum and the maximum of the distribution.
* Density functions at x: probability density (PDF, ∂P(X ≤ x)/∂x), log density (lnPDF) and cumulative distribution (CDF, P(X ≤ x)), as instance members and as static equivalents taking the shape (α > 0) and scale (β > 0) explicitly.
* Sampling: draw one sample, fill an array, or generate a sequence of samples from the inverse Gamma distribution, plus static Sample/Samples/Fill helpers taking the shape and scale, with or without an explicit random number generator.
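A minimal sketch, assuming the class is exposed as InverseGamma(shape, scale) with the usual density and sampling members:

```csharp
using System;
using MathNet.Numerics.Distributions;

// Inverse Gamma with shape alpha = 3 and scale beta = 2 (assumed constructor order: shape, scale).
var invGamma = new InverseGamma(3.0, 2.0);

Console.WriteLine(invGamma.Mean);                        // beta / (alpha - 1) = 1
Console.WriteLine(invGamma.Density(0.5));                // PDF at x = 0.5
Console.WriteLine(invGamma.CumulativeDistribution(1.0)); // P(X <= 1)
Console.WriteLine(invGamma.Sample());
```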
Continuous Univariate Inverse Gaussian distribution (InverseGaussian).

* A constructor taking the mean μ (μ > 0), the shape λ (λ > 0) and the random number generator used to draw samples; a string representation; and a parameter-validity test for (μ, λ).
* Properties: the mean μ, the shape λ, the random number generator used to draw random samples, mean, variance, standard deviation, the median (no closed-form analytical expression exists, so it is approximated numerically and can throw an exception), minimum, maximum, skewness, kurtosis, mode and entropy (currently not supported).
* Density functions: probability density (PDF, ∂P(X ≤ x)/∂x), log density (lnPDF), cumulative distribution (CDF, P(X ≤ x)) and the inverse CDF (solving P(X ≤ x) = p for x), as instance members and as static equivalents taking μ and λ explicitly.
* Sampling: draw one sample, fill an array, or generate a sequence of samples from the inverse Gaussian distribution, as instance members and as static helpers taking a random number generator, μ and λ.
* An estimator that fits the inverse Gaussian parameters to sample data with maximum likelihood, optionally taking a random number generator for the resulting distribution.
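A hedged sketch; the documented constructor takes an explicit random number generator, and the parameter order (mu, lambda, rng) is assumed:

```csharp
using System;
using MathNet.Numerics.Distributions;

// Inverse Gaussian with mean mu = 1 and shape lambda = 2
// (assumed constructor order: mu, lambda, random number generator).
var invGauss = new InverseGaussian(1.0, 2.0, new Random(42));

Console.WriteLine(invGauss.Mean);                        // mu = 1
Console.WriteLine(invGauss.Density(0.8));                // PDF at x = 0.8
Console.WriteLine(invGauss.CumulativeDistribution(1.5)); // P(X <= 1.5)
Console.WriteLine(invGauss.Sample());
```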
Multivariate Inverse Wishart distribution. This distribution is parameterized by the degrees of freedom ν and the scale matrix S; the inverse Wishart distribution is the conjugate prior for the covariance matrix of a multivariate normal distribution. See Wikipedia - Inverse-Wishart distribution.

* Internal state: a cache of the Cholesky factorization of the scale matrix.
* Constructors taking the degrees of freedom (ν) and the scale matrix (Ψ), with an optional random number generator; a string representation; and a parameter-validity test.
* Properties: degrees of freedom and scale matrix (gettable and settable), the random number generator used to draw random samples, the mean, the mode (A. O'Hagan and J. J. Forster (2004), Kendall's Advanced Theory of Statistics: Bayesian Inference, 2B (2nd ed.), Arnold, ISBN 0-340-80752-0) and the variance (Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979), Multivariate Analysis).
* The probability density evaluated at a matrix; throws if the argument does not have the same dimensions as the scale matrix.
* Sampling: an inverse-Wishart-distributed random matrix is drawn by sampling a Wishart random variable and inverting the matrix, both as an instance member and as a static helper taking a random number generator, the degrees of freedom and the scale matrix.
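A sketch under the assumption that the constructor matches the documented parameters (degrees of freedom, scale matrix) and that matrices come from MathNet.Numerics.LinearAlgebra:

```csharp
using System;
using MathNet.Numerics.Distributions;
using MathNet.Numerics.LinearAlgebra;

// Inverse Wishart with nu = 5 degrees of freedom and a 2x2 identity scale matrix
// (assumed constructor order: degrees of freedom, scale matrix).
var scale = Matrix<double>.Build.DenseIdentity(2);
var invWishart = new InverseWishart(5.0, scale);

Matrix<double> sample = invWishart.Sample(); // one random 2x2 positive-definite matrix
Console.WriteLine(sample);
Console.WriteLine(invWishart.Mean);          // Psi / (nu - d - 1) for nu > d + 1
```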
- The degree of freedom (ν) for the inverse Wishart distribution. - The scale matrix (Ψ) for the inverse Wishart distribution. - a sample from the distribution. - - - - Univariate Probability Distribution. - - - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the median of the distribution. - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Continuous Univariate Laplace distribution. - The Laplace distribution is a distribution over the real numbers parameterized by a mean and - scale parameter. The PDF is: - p(x) = \frac{1}{2 * scale} \exp{- |x - mean| / scale}. - Wikipedia - Laplace distribution. - - - - - Initializes a new instance of the class (location = 0, scale = 1). - - - - - Initializes a new instance of the class. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - If is negative. - - - - Initializes a new instance of the class. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - The random number generator which is used to draw random samples. - If is negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - - - - Gets the location (μ) of the Laplace distribution. - - - - - Gets the scale (b) of the Laplace distribution. Range: b > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Samples a Laplace distributed random variable. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sample from the Laplace distribution. - - a sample from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. 
ln(∂P(X ≤ x)/∂x). - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - the cumulative distribution at location . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The location (μ) of the distribution. - The scale (b) of the distribution. Range: b > 0. - a sequence of samples from the distribution. - - - - Continuous Univariate Log-Normal distribution. - For details about this distribution, see - Wikipedia - Log-Normal distribution. - - - - - Initializes a new instance of the class. - The distribution will be initialized with the default - random number generator. - - The log-scale (μ) of the logarithm of the distribution. - The shape (σ) of the logarithm of the distribution. Range: σ ≥ 0. - - - - Initializes a new instance of the class. - The distribution will be initialized with the default - random number generator. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. - - - - Constructs a log-normal distribution with the desired mu and sigma parameters. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. Optional, can be null. - A log-normal distribution. - - - - Constructs a log-normal distribution with the desired mean and variance. - - The mean of the log-normal distribution. - The variance of the log-normal distribution. - The random number generator which is used to draw random samples. Optional, can be null. - A log-normal distribution. - - - - Estimates the log-normal distribution parameters from sample data with maximum-likelihood. - - The samples to estimate the distribution parameters from. - The random number generator which is used to draw random samples. Optional, can be null. - A log-normal distribution. - MATLAB: lognfit - - - - A string representation of the distribution. - - a string representation of the distribution. 
- - - - Tests whether the provided values are valid parameters for this distribution. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - - - - Gets the log-scale (μ) (mean of the logarithm) of the distribution. - - - - - Gets the shape (σ) (standard deviation of the logarithm) of the distribution. Range: σ ≥ 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mu of the log-normal distribution. - - - - - Gets the variance of the log-normal distribution. - - - - - Gets the standard deviation of the log-normal distribution. - - - - - Gets the entropy of the log-normal distribution. - - - - - Gets the skewness of the log-normal distribution. - - - - - Gets the mode of the log-normal distribution. - - - - - Gets the median of the log-normal distribution. - - - - - Gets the minimum of the log-normal distribution. - - - - - Gets the maximum of the log-normal distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the log-normal distribution using the Box-Muller algorithm. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the density at . - - MATLAB: lognpdf - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the density. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the cumulative distribution at location . - - MATLAB: logncdf - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - the inverse cumulative density at . 
- - MATLAB: logninv - - - - Generates a sample from the log-normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the log-normal distribution using the Box-Muller algorithm. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. - - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The log-scale (μ) of the distribution. - The shape (σ) of the distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Multivariate Matrix-valued Normal distributions. The distribution - is parameterized by a mean matrix (M), a covariance matrix for the rows (V) and a covariance matrix - for the columns (K). If the dimension of M is d-by-m then V is d-by-d and K is m-by-m. - Wikipedia - MatrixNormal distribution. - - - - - The mean of the matrix normal distribution. - - - - - The covariance matrix for the rows. - - - - - The covariance matrix for the columns. - - - - - Initializes a new instance of the class. - - The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - If the dimensions of the mean and two covariance matrices don't match. - - - - Initializes a new instance of the class. - - The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - The random number generator which is used to draw random samples. - If the dimensions of the mean and two covariance matrices don't match. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - - - - Gets the mean. (M) - - The mean of the distribution. - - - - Gets the row covariance. (V) - - The row covariance. - - - - Gets the column covariance. (K) - - The column covariance. - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Evaluates the probability density function for the matrix normal distribution. - - The matrix at which to evaluate the density at. - the density at - If the argument does not have the correct dimensions. - - - - Samples a matrix normal distributed random variable. - - A random number from this distribution. - - - - Samples a matrix normal distributed random variable. - - The random number generator to use. 
- The mean of the matrix normal. - The covariance matrix for the rows. - The covariance matrix for the columns. - If the dimensions of the mean and two covariance matrices don't match. - a sequence of samples from the distribution. - - - - Samples a vector normal distributed random variable. - - The random number generator to use. - The mean of the vector normal distribution. - The covariance matrix of the vector normal distribution. - a sequence of samples from defined distribution. - - - - Multivariate Multinomial distribution. For details about this distribution, see - Wikipedia - Multinomial distribution. - - - The distribution is parameterized by a vector of ratios: in other words, the parameter - does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized - to sum to 1 in floating point representation. - - - - - Stores the normalized multinomial probabilities. - - - - - The number of trials. - - - - - Initializes a new instance of the Multinomial class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - If any of the probabilities are negative or do not sum to one. - If is negative. - - - - Initializes a new instance of the Multinomial class. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - The random number generator which is used to draw random samples. - If any of the probabilities are negative or do not sum to one. - If is negative. - - - - Initializes a new instance of the Multinomial class from histogram . The distribution will - not be automatically updated when the histogram changes. - - Histogram instance - The number of trials. - If any of the probabilities are negative or do not sum to one. - If is negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - If any of the probabilities are negative returns false, - if the sum of parameters is 0.0, or if the number of trials is negative; otherwise true. - - - - Gets the proportion of ratios. - - - - - Gets the number of trials. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Computes values of the probability mass function. - - Non-negative integers x1, ..., xk - The probability mass at location . - When is null. - When length of is not equal to event probabilities count. - - - - Computes values of the log probability mass function. - - Non-negative integers x1, ..., xk - The log probability mass at location . - When is null. - When length of is not equal to event probabilities count. - - - - Samples one multinomial distributed random variable. - - the counts for each of the different possible values. - - - - Samples a sequence multinomially distributed random variables. - - a sequence of counts for each of the different possible values. - - - - Samples one multinomial distributed random variable. - - The random number generator to use. 
- An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of trials. - the counts for each of the different possible values. - - - - Samples a multinomially distributed random variable. - - The random number generator to use. - An array of nonnegative ratios: this array does not need to be normalized - as this is often impossible using floating point arithmetic. - The number of variables needed. - a sequence of counts for each of the different possible values. - - - - Discrete Univariate Negative Binomial distribution. - The negative binomial is a distribution over the natural numbers with two parameters r, p. For the special - case that r is an integer one can interpret the distribution as the number of failures before the r'th success - when the probability of success is p. - Wikipedia - NegativeBinomial distribution. - - - - - Initializes a new instance of the class. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Initializes a new instance of the class. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - The random number generator which is used to draw random samples. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Gets the number of successes. Range: r ≥ 0. - - - - - Gets the probability of success. Range: 0 ≤ p ≤ 1. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). 
- - The location in the domain where we want to evaluate the log probability mass function. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - the cumulative distribution at location . - - - - - Samples a negative binomial distributed random variable. - - The random number generator to use. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - a sample from the distribution. - - - - Samples a NegativeBinomial distributed random variable. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of NegativeBinomial distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a random variable. - - The random number generator to use. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The random number generator to use. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Samples a random variable. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Samples a sequence of this random variable. - - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The number of successes (r) required to stop the experiment. Range: r ≥ 0. - The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. - - - - Continuous Univariate Normal distribution, also known as Gaussian distribution. - For details about this distribution, see - Wikipedia - Normal distribution. - - - - - Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 - and standard deviation 1.0. The distribution will - be initialized with the default random number generator. - - - - - Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 - and standard deviation 1.0. The distribution will - be initialized with the default random number generator. - - The random number generator which is used to draw random samples. - - - - Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will - be initialized with the default random number generator. - - The mean (μ) of the normal distribution. 
- The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - - - - Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will - be initialized with the default random number generator. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. - - - - Constructs a normal distribution from a mean and standard deviation. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The random number generator which is used to draw random samples. Optional, can be null. - a normal distribution. - - - - Constructs a normal distribution from a mean and variance. - - The mean (μ) of the normal distribution. - The variance (σ^2) of the normal distribution. - The random number generator which is used to draw random samples. Optional, can be null. - A normal distribution. - - - - Constructs a normal distribution from a mean and precision. - - The mean (μ) of the normal distribution. - The precision of the normal distribution. - The random number generator which is used to draw random samples. Optional, can be null. - A normal distribution. - - - - Estimates the normal distribution parameters from sample data with maximum-likelihood. - - The samples to estimate the distribution parameters from. - The random number generator which is used to draw random samples. Optional, can be null. - A normal distribution. - MATLAB: normfit - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - - - - Gets the mean (μ) of the normal distribution. - - - - - Gets the standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - - - - - Gets the variance of the normal distribution. - - - - - Gets the precision of the normal distribution. - - - - - Gets the random number generator which is used to draw random samples. - - - - - Gets the entropy of the normal distribution. - - - - - Gets the skewness of the normal distribution. - - - - - Gets the mode of the normal distribution. - - - - - Gets the median of the normal distribution. - - - - - Gets the minimum of the normal distribution. - - - - - Gets the maximum of the normal distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the normal distribution using the Box-Muller algorithm. - - a sample from the distribution. 
- - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The location at which to compute the density. - the density at . - - MATLAB: normpdf - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - the cumulative distribution at location . - - MATLAB: normcdf - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - the inverse cumulative density at . - - MATLAB: norminv - - - - Generates a sample from the normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. - - The random number generator to use. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Generates a sample from the normal distribution using the Box-Muller algorithm. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sample from the distribution. - - - - Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. - - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The mean (μ) of the normal distribution. - The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. - a sequence of samples from the distribution. - - - - This structure represents the type over which the distribution - is defined. - - - - - Initializes a new instance of the struct. - - The mean of the pair. - The precision of the pair. - - - - Gets or sets the mean of the pair. - - - - - Gets or sets the precision of the pair. 
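The removed documentation describes one uniform usage pattern across all of these classes: construct the distribution (or call the static helpers with the parameters), query the density/CDF/quantile functions, draw samples, and optionally fit parameters from data. A minimal C# sketch of that pattern for the Normal class, assuming the Math.NET Numerics `MathNet.Numerics.Distributions` API that this file documents (the names `Density`, `CumulativeDistribution`, `InverseCumulativeDistribution`, `PDF`, `CDF`, `Samples` and `Estimate` come from that library, not from anything retained in this diff):

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class NormalDistributionDemo
{
    static void Main()
    {
        // Construct a normal distribution with mean 10 and standard deviation 2.
        var normal = new Normal(10.0, 2.0);

        // Density, cumulative distribution and quantile, i.e. PDF, CDF and InvCDF.
        double pdf = normal.Density(11.0);                       // ∂P(X ≤ x)/∂x at x = 11
        double cdf = normal.CumulativeDistribution(11.0);        // P(X ≤ 11)
        double q95 = normal.InverseCumulativeDistribution(0.95); // x such that P(X ≤ x) = 0.95

        // The same functions exist as static helpers taking the parameters directly.
        double pdfStatic = Normal.PDF(10.0, 2.0, 11.0);
        double cdfStatic = Normal.CDF(10.0, 2.0, 11.0);

        // Draw samples and re-estimate the parameters by maximum likelihood.
        double[] samples = normal.Samples().Take(10000).ToArray();
        Normal fitted = Normal.Estimate(samples);

        Console.WriteLine($"pdf={pdf:F4} cdf={cdf:F4} q95={q95:F4}");
        Console.WriteLine($"static: pdf={pdfStatic:F4} cdf={cdfStatic:F4}");
        Console.WriteLine($"fitted: mean={fitted.Mean:F3} stddev={fitted.StdDev:F3}");
    }
}
```

The same instance/static split applies to every distribution listed above.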
- - - - - Multivariate Normal-Gamma Distribution. - The distribution is the conjugate prior distribution for the - distribution. It specifies a prior over the mean and precision of the distribution. - It is parameterized by four numbers: the mean location, the mean scale, the precision shape and the - precision inverse scale. - The distribution NG(mu, tau | mloc,mscale,psscale,pinvscale) = Normal(mu | mloc, 1/(mscale*tau)) * Gamma(tau | psscale,pinvscale). - The following degenerate cases are special: when the precision is known, - the precision shape will encode the value of the precision while the precision inverse scale is positive - infinity. When the mean is known, the mean location will encode the value of the mean while the scale - will be positive infinity. A completely degenerate NormalGamma distribution with known mean and precision is possible as well. - Wikipedia - Normal-Gamma distribution. - - - - - Initializes a new instance of the class. - - The location of the mean. - The scale of the mean. - The shape of the precision. - The inverse scale of the precision. - - - - Initializes a new instance of the class. - - The location of the mean. - The scale of the mean. - The shape of the precision. - The inverse scale of the precision. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The location of the mean. - The scale of the mean. - The shape of the precision. - The inverse scale of the precision. - - - - Gets the location of the mean. - - - - - Gets the scale of the mean. - - - - - Gets the shape of the precision. - - - - - Gets the inverse scale of the precision. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Returns the marginal distribution for the mean of the NormalGamma distribution. - - the marginal distribution for the mean of the NormalGamma distribution. - - - - Returns the marginal distribution for the precision of the distribution. - - The marginal distribution for the precision of the distribution/ - - - - Gets the mean of the distribution. - - The mean of the distribution. - - - - Gets the variance of the distribution. - - The mean of the distribution. - - - - Evaluates the probability density function for a NormalGamma distribution. - - The mean/precision pair of the distribution - Density value - - - - Evaluates the probability density function for a NormalGamma distribution. - - The mean of the distribution - The precision of the distribution - Density value - - - - Evaluates the log probability density function for a NormalGamma distribution. - - The mean/precision pair of the distribution - The log of the density value - - - - Evaluates the log probability density function for a NormalGamma distribution. - - The mean of the distribution - The precision of the distribution - The log of the density value - - - - Generates a sample from the NormalGamma distribution. - - a sample from the distribution. - - - - Generates a sequence of samples from the NormalGamma distribution - - a sequence of samples from the distribution. - - - - Generates a sample from the NormalGamma distribution. - - The random number generator to use. - The location of the mean. - The scale of the mean. - The shape of the precision. - The inverse scale of the precision. - a sample from the distribution. 
- - - - Generates a sequence of samples from the NormalGamma distribution - - The random number generator to use. - The location of the mean. - The scale of the mean. - The shape of the precision. - The inverse scale of the precision. - a sequence of samples from the distribution. - - - - Continuous Univariate Pareto distribution. - The Pareto distribution is a power law probability distribution that coincides with social, - scientific, geophysical, actuarial, and many other types of observable phenomena. - For details about this distribution, see - Wikipedia - Pareto distribution. - - - - - Initializes a new instance of the class. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - If or are negative. - - - - Initializes a new instance of the class. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The random number generator which is used to draw random samples. - If or are negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - - - - Gets the scale (xm) of the distribution. Range: xm > 0. - - - - - Gets the shape (α) of the distribution. Range: α > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Draws a random sample from the distribution. - - A random number from this distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Pareto distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The scale (xm) of the distribution. Range: xm > 0. 
- The shape (α) of the distribution. Range: α > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - the inverse cumulative density at . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - a sequence of samples from the distribution. - - - - Discrete Univariate Poisson distribution. - - - Distribution is described at Wikipedia - Poisson distribution. - Knuth's method is used to generate Poisson distributed random variables. - f(x) = exp(-λ)*λ^x/x!; - - - - - Initializes a new instance of the class. - - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - If is equal or less then 0.0. - - - - Initializes a new instance of the class. - - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - The random number generator which is used to draw random samples. - If is equal or less then 0.0. - - - - Returns a that represents this instance. - - - A that represents this instance. - - - - - Tests whether the provided values are valid parameters for this distribution. - - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - - - - Gets the Poisson distribution parameter λ. Range: λ > 0. - - - - - Gets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. 
- - Approximation, see Wikipedia Poisson distribution - - - - Gets the skewness of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - Approximation, see Wikipedia Poisson distribution - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - the cumulative distribution at location . - - - - - Generates one sample from the Poisson distribution. - - The random source to use. - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - A random sample from the Poisson distribution. - - - - Generates one sample from the Poisson distribution by Knuth's method. - - The random source to use. - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - A random sample from the Poisson distribution. - - - - Generates one sample from the Poisson distribution by "Rejection method PA". - - The random source to use. - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - A random sample from the Poisson distribution. - "Rejection method PA" from "The Computer Generation of Poisson Random Variables" by A. C. Atkinson, - Journal of the Royal Statistical Society Series C (Applied Statistics) Vol. 28, No. 1. (1979) - The article is on pages 29-35. The algorithm given here is on page 32. - - - - Samples a Poisson distributed random variable. - - A sample from the Poisson distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of Poisson distributed random variables. - - a sequence of successes in N trials. - - - - Samples a Poisson distributed random variable. - - The random number generator to use. - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - A sample from the Poisson distribution. - - - - Samples a sequence of Poisson distributed random variables. - - The random number generator to use. - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - a sequence of samples from the distribution. 
- - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - a sequence of samples from the distribution. - - - - Samples a Poisson distributed random variable. - - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - A sample from the Poisson distribution. - - - - Samples a sequence of Poisson distributed random variables. - - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. - a sequence of samples from the distribution. - - - - Continuous Univariate Rayleigh distribution. - The Rayleigh distribution (pronounced /ˈreɪli/) is a continuous probability distribution. As an - example of how it arises, the wind speed will have a Rayleigh distribution if the components of - the two-dimensional wind velocity vector are uncorrelated and normally distributed with equal variance. - For details about this distribution, see - Wikipedia - Rayleigh distribution. - - - - - Initializes a new instance of the class. - - The scale (σ) of the distribution. Range: σ > 0. - If is negative. - - - - Initializes a new instance of the class. - - The scale (σ) of the distribution. Range: σ > 0. - The random number generator which is used to draw random samples. - If is negative. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The scale (σ) of the distribution. Range: σ > 0. - - - - Gets the scale (σ) of the distribution. Range: σ > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Draws a random sample from the distribution. - - A random number from this distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Rayleigh distribution. 
- - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The scale (σ) of the distribution. Range: σ > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The scale (σ) of the distribution. Range: σ > 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The scale (σ) of the distribution. Range: σ > 0. - the inverse cumulative density at . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The scale (σ) of the distribution. Range: σ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The scale (σ) of the distribution. Range: σ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The scale (σ) of the distribution. Range: σ > 0. - a sequence of samples from the distribution. - - - - Continuous Univariate Skewed Generalized Error Distribution (SGED). - Implements the univariate SSkewed Generalized Error Distribution. For details about this - distribution, see - - Wikipedia - Generalized Error Distribution. - It includes Laplace, Normal and Student-t distributions. - This is the distribution with q=Inf. - - This implementation is based on the R package dsgt and corresponding viginette, see - https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf. Compared to that - implementation, the options for mean adjustment and variance adjustment are always true. - The location (μ) is the mean of the distribution. - The scale (σ) squared is the variance of the distribution. - - The distribution will use the by - default. Users can get/set the random number generator by using the - property. - The statistics classes will check all the incoming parameters - whether they are in the allowed range. - - - - Initializes a new instance of the SkewedGeneralizedError class. This is a generalized error distribution - with location=0.0, scale=1.0, skew=0.0 and p=2.0 (a standard normal distribution). - - - - - Initializes a new instance of the SkewedGeneralizedT class with a particular location, scale, skew - and kurtosis parameters. 
Different parameterizations result in different distributions. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - Parameter that controls kurtosis. Range: p > 0 - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - Parameter that controls kurtosis. Range: p > 0 - - - - Gets the location (μ) of the Skewed Generalized t-distribution. - - - - - Gets the scale (σ) of the Skewed Generalized t-distribution. Range: σ > 0. - - - - - Gets the skew (λ) of the Skewed Generalized t-distribution. Range: 1 > λ > -1. - - - - - Gets the parameter that controls the kurtosis of the distribution. Range: p > 0. - - - - - Generates a sample from the Skew Generalized Error distribution. - - The random number generator to use. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - Parameter that controls kurtosis. Range: p > 0 - a sample from the distribution. - - - - Generates a sequence of samples from the Skew Generalized Error distribution using inverse transform. - - The random number generator to use. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - Parameter that controls kurtosis. Range: p > 0 - a sequence of samples from the distribution. - - - - Fills an array with samples from the Skew Generalized Error distribution using inverse transform. - - The random number generator to use. - The array to fill with the samples. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - Parameter that controls kurtosis. Range: p > 0 - a sequence of samples from the distribution. - - - - Generates a sample from the Skew Generalized Error distribution. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - Parameter that controls kurtosis. Range: p > 0 - a sample from the distribution. - - - - Generates a sequence of samples from the Skew Generalized Error distribution using inverse transform. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - Parameter that controls kurtosis. Range: p > 0 - a sequence of samples from the distribution. - - - - Fills an array with samples from the Skew Generalized Error distribution using inverse transform. - - The array to fill with the samples. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - Parameter that controls kurtosis. Range: p > 0 - a sequence of samples from the distribution. - - - - Continuous Univariate Skewed Generalized T-distribution. - Implements the univariate Skewed Generalized t-distribution. For details about this - distribution, see - - Wikipedia - Skewed generalized t-distribution. - The skewed generalized t-distribution contains many different distributions within it - as special cases based on the parameterization chosen. - - This implementation is based on the R package dsgt and corresponding viginette, see - https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf. 
Compared to that - implementation, the options for mean adjustment and variance adjustment are always true. - The location (μ) is the mean of the distribution. - The scale (σ) squared is the variance of the distribution. - - The distribution will use the by - default. Users can get/set the random number generator by using the - property. - The statistics classes will check all the incoming parameters - whether they are in the allowed range. - - - - Initializes a new instance of the SkewedGeneralizedT class. This is a skewed generalized t-distribution - with location=0.0, scale=1.0, skew=0.0, p=2.0 and q=Inf (a standard normal distribution). - - - - - Initializes a new instance of the SkewedGeneralizedT class with a particular location, scale, skew - and kurtosis parameters. Different parameterizations result in different distributions. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - - - - Given a parameter set, returns the distribution that matches this parameterization. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - Null if no known distribution matches the parameterization, else the distribution. - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - - - - Gets the location (μ) of the Skewed Generalized t-distribution. - - - - - Gets the scale (σ) of the Skewed Generalized t-distribution. Range: σ > 0. - - - - - Gets the skew (λ) of the Skewed Generalized t-distribution. Range: 1 > λ > -1. - - - - - Gets the first parameter that controls the kurtosis of the distribution. Range: p > 0. - - - - - Gets the second parameter that controls the kurtosis of the distribution. Range: q > 0. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - The location at which to compute the density. - the density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. 
Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - the inverse cumulative density at . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the Skew Generalized t-distribution. - - The random number generator to use. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - a sample from the distribution. - - - - Generates a sequence of samples from the Skew Generalized t-distribution using inverse transform. - - The random number generator to use. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - a sequence of samples from the distribution. - - - - Fills an array with samples from the Skew Generalized t-distribution using inverse transform. - - The random number generator to use. - The array to fill with the samples. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - a sequence of samples from the distribution. - - - - Generates a sample from the Skew Generalized t-distribution. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - a sample from the distribution. - - - - Generates a sequence of samples from the Skew Generalized t-distribution using inverse transform. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - a sequence of samples from the distribution. - - - - Fills an array with samples from the Skew Generalized t-distribution using inverse transform. - - The array to fill with the samples. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The skew, 1 > λ > -1 - First parameter that controls kurtosis. Range: p > 0 - Second parameter that controls kurtosis. Range: q > 0 - a sequence of samples from the distribution. - - - - Continuous Univariate Stable distribution. 
- A random variable is said to be stable (or to have a stable distribution) if it has - the property that a linear combination of two independent copies of the variable has - the same distribution, up to location and scale parameters. - For details about this distribution, see - Wikipedia - Stable distribution. - - - - - Initializes a new instance of the class. - - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - - - - Initializes a new instance of the class. - - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - - - - Gets the stability (α) of the distribution. Range: 2 ≥ α > 0. - - - - - Gets The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - - - - - Gets the scale (c) of the distribution. Range: c > 0. - - - - - Gets the location (μ) of the distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets he entropy of the distribution. - - Always throws a not supported exception. - - - - Gets the skewness of the distribution. - - Throws a not supported exception of Alpha != 2. - - - - Gets the mode of the distribution. - - Throws a not supported exception if Beta != 0. - - - - Gets the median of the distribution. - - Throws a not supported exception if Beta != 0. - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - Throws a not supported exception if Alpha != 2, (Alpha != 1 and Beta !=0), or (Alpha != 0.5 and Beta != 1) - - - - Samples the distribution. - - The random number generator to use. - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - a random number from the distribution. - - - - Draws a random sample from the distribution. - - A random number from this distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Stable distribution. - - a sequence of samples from the distribution. 
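To make the Stable API above concrete: a short sketch constructing the distribution with the documented parameter order (stability α, skewness β, scale c, location μ), drawing samples, and evaluating the CDF only for one of the analytically supported special cases.

```csharp
using System;
using MathNet.Numerics.Distributions;

// Documented parameter order: stability α in (0, 2], skewness β in [-1, 1],
// scale c > 0, location μ.
var stable = new Stable(1.5, 0.0, 1.0, 0.0);

// Sampling works for any valid parameterization.
double draw = stable.Sample();

// As noted above, the CDF is only supported for the special cases
// α = 2 (normal), α = 1 with β = 0 (Cauchy), and α = 0.5 with β = 1 (Lévy);
// other parameterizations throw a not-supported exception.
var normalCase = new Stable(2.0, 0.0, 1.0, 0.0);
double p = normalCase.CumulativeDistribution(0.5);

Console.WriteLine($"sample: {draw}, P(X ≤ 0.5) for the α = 2 case: {p}");
```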
- - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - the cumulative distribution at location . - - - - - Generates a sample from the distribution. - - The random number generator to use. - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The random number generator to use. - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - a sequence of samples from the distribution. - - - - Generates a sample from the distribution. - - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - a sample from the distribution. - - - - Generates a sequence of samples from the distribution. - - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The stability (α) of the distribution. Range: 2 ≥ α > 0. - The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. - The scale (c) of the distribution. Range: c > 0. - The location (μ) of the distribution. - a sequence of samples from the distribution. - - - - Continuous Univariate Student's T-distribution. - Implements the univariate Student t-distribution. For details about this - distribution, see - - Wikipedia - Student's t-distribution. - - We use a slightly generalized version (compared to - Wikipedia) of the Student t-distribution. 
Namely, one which also - parameterizes the location and scale. See the book "Bayesian Data - Analysis" by Gelman et al. for more details. - The density of the Student t-distribution p(x|mu,scale,dof) = - Gamma((dof+1)/2) (1 + (x - mu)^2 / (scale * scale * dof))^(-(dof+1)/2) / - (Gamma(dof/2)*Sqrt(dof*pi*scale)). - The distribution will use the by - default. Users can get/set the random number generator by using the - property. - The statistics classes will check all the incoming parameters - whether they are in the allowed range. This might involve heavy - computation. Optionally, by setting Control.CheckDistributionParameters - to false, all parameter checks can be turned off. - - - - Initializes a new instance of the StudentT class. This is a Student t-distribution with location 0.0 - scale 1.0 and degrees of freedom 1. - - - - - Initializes a new instance of the StudentT class with a particular location, scale and degrees of - freedom. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - - - - Initializes a new instance of the StudentT class with a particular location, scale and degrees of - freedom. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - - - - Gets the location (μ) of the Student t-distribution. - - - - - Gets the scale (σ) of the Student t-distribution. Range: σ > 0. - - - - - Gets the degrees of freedom (ν) of the Student t-distribution. Range: ν > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Student t-distribution. - - - - - Gets the variance of the Student t-distribution. - - - - - Gets the standard deviation of the Student t-distribution. - - - - - Gets the entropy of the Student t-distribution. - - - - - Gets the skewness of the Student t-distribution. - - - - - Gets the mode of the Student t-distribution. - - - - - Gets the median of the Student t-distribution. - - - - - Gets the minimum of the Student t-distribution. - - - - - Gets the maximum of the Student t-distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . 
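Since the location-scale density above is easy to misread, here is a small check (a sketch, not library documentation) that evaluates the formula directly and compares it against the library's `StudentT.Density`; note that the normalizing denominator is Γ(ν/2)·√(νπ)·σ, i.e. the `Sqrt(dof*pi*scale)` in the text reads like a typo for `Sqrt(dof*pi)*scale`.

```csharp
using System;
using MathNet.Numerics;
using MathNet.Numerics.Distributions;

double mu = 1.0, scale = 2.0, dof = 5.0, x = 1.7;

// p(x | μ, σ, ν) = Γ((ν+1)/2) · (1 + ((x-μ)/σ)²/ν)^(-(ν+1)/2) / (Γ(ν/2)·√(νπ)·σ)
double z = (x - mu) / scale;
double byHand = SpecialFunctions.Gamma((dof + 1) / 2)
                * Math.Pow(1 + z * z / dof, -(dof + 1) / 2)
                / (SpecialFunctions.Gamma(dof / 2) * Math.Sqrt(dof * Math.PI) * scale);

// Library value for comparison; the two should agree to machine precision.
double byLibrary = new StudentT(mu, scale, dof).Density(x);

Console.WriteLine($"by hand: {byHand}, library: {byLibrary}");
```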
- - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Samples student-t distributed random variables. - - The algorithm is method 2 in section 5, chapter 9 - in L. Devroye's "Non-Uniform Random Variate Generation" - The random number generator to use. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - a random number from the standard student-t distribution. - - - - Generates a sample from the Student t-distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Student t-distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - the inverse cumulative density at . - - WARNING: currently not an explicit implementation, hence slow and unreliable. - - - - Generates a sample from the Student t-distribution. - - The random number generator to use. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the Student t-distribution using the Box-Muller algorithm. - - The random number generator to use. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the Student t-distribution. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. 
- a sample from the distribution. - - - - Generates a sequence of samples from the Student t-distribution using the Box-Muller algorithm. - - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The location (μ) of the distribution. - The scale (σ) of the distribution. Range: σ > 0. - The degrees of freedom (ν) for the distribution. Range: ν > 0. - a sequence of samples from the distribution. - - - - Triangular distribution. - For details, see Wikipedia - Triangular distribution. - - The distribution will use the by default. - Users can get/set the random number generator by using the property. - The statistics classes will check whether all the incoming parameters are in the allowed range. This might involve heavy computation. Optionally, by setting Control.CheckDistributionParameters - to false, all parameter checks can be turned off. - - - - Initializes a new instance of the Triangular class with the given lower bound, upper bound and mode. - - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - If the upper bound is smaller than the mode or if the mode is smaller than the lower bound. - - - - Initializes a new instance of the Triangular class with the given lower bound, upper bound and mode. - - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - The random number generator which is used to draw random samples. - If the upper bound is smaller than the mode or if the mode is smaller than the lower bound. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - - - - Gets the lower bound of the distribution. - - - - - Gets the upper bound of the distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - - Gets the skewness of the distribution. - - - - - Gets or sets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - - Gets the minimum of the distribution. - - - - - Gets the maximum of the distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. 
This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - the inverse cumulative density at . - - - - - Generates a sample from the Triangular distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Triangular distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - the cumulative distribution at location . - - - - - Computes the inverse of the cumulative distribution function (InvCDF) for the distribution - at the given probability. This is also known as the quantile or percent point function. - - The location at which to compute the inverse cumulative density. - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - the inverse cumulative density at . - - - - - Generates a sample from the Triangular distribution. - - The random number generator to use. - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - a sample from the distribution. - - - - Generates a sequence of samples from the Triangular distribution. - - The random number generator to use. - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - a sequence of samples from the distribution. - - - - Generates a sample from the Triangular distribution. - - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - a sample from the distribution. - - - - Generates a sequence of samples from the Triangular distribution. - - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). Range: lower ≤ mode ≤ upper - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - Lower bound. Range: lower ≤ mode ≤ upper - Upper bound. Range: lower ≤ mode ≤ upper - Mode (most frequent value). 
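A brief sketch of the Triangular distribution described above, assuming the documented argument order (lower bound, upper bound, mode) and the library's usual instance members.

```csharp
using System;
using MathNet.Numerics.Distributions;

// Documented argument order: lower bound, upper bound, mode, with
// lower ≤ mode ≤ upper; anything else throws, as stated above.
var tri = new Triangular(0.0, 10.0, 3.0);

double peak = tri.Density(3.0);                       // 2/(upper-lower) = 0.2 at the mode
double pBelowMode = tri.CumulativeDistribution(3.0);  // (mode-lower)/(upper-lower) = 0.3
double median = tri.InverseCumulativeDistribution(0.5);
double draw = tri.Sample();

Console.WriteLine($"{peak} {pBelowMode} {median} {draw}");
```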
Range: lower ≤ mode ≤ upper - a sequence of samples from the distribution. - - - - Initializes a new instance of the TruncatedPareto class. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The truncation (T) of the distribution. Range: T > xm. - The random number generator which is used to draw random samples. - If or are non-positive or if T ≤ xm. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The truncation (T) of the distribution. Range: T > xm. - - - - Gets the random number generator which is used to draw random samples. - - - - - Gets the scale (xm) of the distribution. Range: xm > 0. - - - - - Gets the shape (α) of the distribution. Range: α > 0. - - - - - Gets the truncation (T) of the distribution. Range: T > 0. - - - - - Gets the n-th raw moment of the distribution. - - The order (n) of the moment. Range: n ≥ 1. - the n-th moment of the distribution. - - - - Gets the mean of the truncated Pareto distribution. - - - - - Gets the variance of the truncated Pareto distribution. - - - - - Gets the standard deviation of the truncated Pareto distribution. - - - - - Gets the mode of the truncated Pareto distribution (not supported). - - - - - Gets the minimum of the truncated Pareto distribution. - - - - - Gets the maximum of the truncated Pareto distribution. - - - - - Gets the entropy of the truncated Pareto distribution (not supported). - - - - - Gets the skewness of the truncated Pareto distribution. - - - - - Gets the median of the truncated Pareto distribution. - - - - - Generates a sample from the truncated Pareto distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - - - - Generates a sequence of samples from the truncated Pareto distribution. - - a sequence of samples from the distribution. - - - - Generates a sample from the truncated Pareto distribution. - - The random number generator to use. - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The truncation (T) of the distribution. Range: T > xm. - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The truncation (T) of the distribution. Range: T > xm. - - - - Generates a sequence of samples from the truncated Pareto distribution. - - The random number generator to use. - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The truncation (T) of the distribution. Range: T > xm. - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). 
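The truncated Pareto members above can be exercised with a short sketch; the constructor order (scale xm, shape α, truncation T, random source) follows the parameter list given here and is an assumption to that extent.

```csharp
using System;
using MathNet.Numerics.Distributions;

// Documented parameter order: scale xm > 0, shape α > 0, truncation T > xm,
// plus an explicit random source.
var tp = new TruncatedPareto(1.0, 2.5, 10.0, new Random(7));

double density = tp.Density(2.0);
double p = tp.CumulativeDistribution(2.0);
double q90 = tp.InverseCumulativeDistribution(0.9);
double draw = tp.Sample();

// Because the support is bounded above by T, the raw moments exist,
// unlike for the untruncated Pareto with small α.
double mean = tp.Mean;

Console.WriteLine($"{density} {p} {q90} {draw} {mean}");
```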
- - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. - - The location at which to compute the inverse cumulative distribution function. - the inverse cumulative distribution at location . - - - - Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The truncation (T) of the distribution. Range: T > xm. - The location at which to compute the inverse cumulative distribution function. - the inverse cumulative distribution at location . - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The truncation (T) of the distribution. Range: T > xm. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The truncation (T) of the distribution. Range: T > xm. - The location at which to compute the log density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The scale (xm) of the distribution. Range: xm > 0. - The shape (α) of the distribution. Range: α > 0. - The truncation (T) of the distribution. Range: T > xm. - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - - Continuous Univariate Weibull distribution. - For details about this distribution, see - Wikipedia - Weibull distribution. - - - The Weibull distribution is parametrized by a shape and scale parameter. - - - - - Reusable intermediate result 1 / (_scale ^ _shape) - - - By caching this parameter we can get slightly better numerics precision - in certain constellations without any additional computations. - - - - - Initializes a new instance of the Weibull class. - - The shape (k) of the Weibull distribution. Range: k > 0. - The scale (λ) of the Weibull distribution. Range: λ > 0. - - - - Initializes a new instance of the Weibull class. - - The shape (k) of the Weibull distribution. Range: k > 0. - The scale (λ) of the Weibull distribution. Range: λ > 0. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The shape (k) of the Weibull distribution. Range: k > 0. - The scale (λ) of the Weibull distribution. Range: λ > 0. - - - - Gets the shape (k) of the Weibull distribution. Range: k > 0. - - - - - Gets the scale (λ) of the Weibull distribution. Range: λ > 0. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the Weibull distribution. - - - - - Gets the variance of the Weibull distribution. - - - - - Gets the standard deviation of the Weibull distribution. - - - - - Gets the entropy of the Weibull distribution. - - - - - Gets the skewness of the Weibull distribution. - - - - - Gets the mode of the Weibull distribution. 
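For the Weibull members above, a short sketch using the documented parameter order (shape k, then scale λ); the static `Weibull.PDF` call mirrors the instance density.

```csharp
using System;
using MathNet.Numerics.Distributions;

// Documented parameter order: shape k > 0, then scale λ > 0.
var weibull = new Weibull(1.5, 2.0);

double pdf = weibull.Density(1.0);
double cdf = weibull.CumulativeDistribution(1.0);   // 1 - exp(-(x/λ)^k)
double mean = weibull.Mean;                         // λ·Γ(1 + 1/k)
double draw = weibull.Sample();

// Static form with the same parameters.
double pdfStatic = Weibull.PDF(1.5, 2.0, 1.0);

Console.WriteLine($"{pdf} {cdf} {mean} {draw} {pdfStatic}");
```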
- - - - - Gets the median of the Weibull distribution. - - - - - Gets the minimum of the Weibull distribution. - - - - - Gets the maximum of the Weibull distribution. - - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The location at which to compute the density. - the density at . - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The location at which to compute the log density. - the log density at . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Generates a sample from the Weibull distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Generates a sequence of samples from the Weibull distribution. - - a sequence of samples from the distribution. - - - - Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. - - The shape (k) of the Weibull distribution. Range: k > 0. - The scale (λ) of the Weibull distribution. Range: λ > 0. - The location at which to compute the density. - the density at . - - - - - Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). - - The shape (k) of the Weibull distribution. Range: k > 0. - The scale (λ) of the Weibull distribution. Range: λ > 0. - The location at which to compute the density. - the log density at . - - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The shape (k) of the Weibull distribution. Range: k > 0. - The scale (λ) of the Weibull distribution. Range: λ > 0. - the cumulative distribution at location . - - - - - Implemented according to: Parameter estimation of the Weibull probability distribution, 1994, Hongzhu Qiao, Chris P. Tsokos - - - - Returns a Weibull distribution. - - - - Generates a sample from the Weibull distribution. - - The random number generator to use. - The shape (k) of the Weibull distribution. Range: k > 0. - The scale (λ) of the Weibull distribution. Range: λ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the Weibull distribution. - - The random number generator to use. - The shape (k) of the Weibull distribution. Range: k > 0. - The scale (λ) of the Weibull distribution. Range: λ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The shape (k) of the Weibull distribution. Range: k > 0. - The scale (λ) of the Weibull distribution. Range: λ > 0. - a sequence of samples from the distribution. - - - - Generates a sample from the Weibull distribution. - - The shape (k) of the Weibull distribution. Range: k > 0. - The scale (λ) of the Weibull distribution. Range: λ > 0. - a sample from the distribution. - - - - Generates a sequence of samples from the Weibull distribution. - - The shape (k) of the Weibull distribution. Range: k > 0. - The scale (λ) of the Weibull distribution. Range: λ > 0. - a sequence of samples from the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The shape (k) of the Weibull distribution. Range: k > 0. 
- The scale (λ) of the Weibull distribution. Range: λ > 0. - a sequence of samples from the distribution. - - - - Multivariate Wishart distribution. This distribution is - parameterized by the degrees of freedom nu and the scale matrix S. The Wishart distribution - is the conjugate prior for the precision (inverse covariance) matrix of the multivariate - normal distribution. - Wikipedia - Wishart distribution. - - - - - The degrees of freedom for the Wishart distribution. - - - - - The scale matrix for the Wishart distribution. - - - - - Caches the Cholesky factorization of the scale matrix. - - - - - Initializes a new instance of the class. - - The degrees of freedom (n) for the Wishart distribution. - The scale matrix (V) for the Wishart distribution. - - - - Initializes a new instance of the class. - - The degrees of freedom (n) for the Wishart distribution. - The scale matrix (V) for the Wishart distribution. - The random number generator which is used to draw random samples. - - - - Tests whether the provided values are valid parameters for this distribution. - - The degrees of freedom (n) for the Wishart distribution. - The scale matrix (V) for the Wishart distribution. - - - - Gets or sets the degrees of freedom (n) for the Wishart distribution. - - - - - Gets or sets the scale matrix (V) for the Wishart distribution. - - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - The mean of the distribution. - - - - Gets the mode of the distribution. - - The mode of the distribution. - - - - Gets the variance of the distribution. - - The variance of the distribution. - - - - Evaluates the probability density function for the Wishart distribution. - - The matrix at which to evaluate the density at. - If the argument does not have the same dimensions as the scale matrix. - the density at . - - - - Samples a Wishart distributed random variable using the method - Algorithm AS 53: Wishart Variate Generator - W. B. Smith and R. R. Hocking - Applied Statistics, Vol. 21, No. 3 (1972), pp. 341-345 - - A random number from this distribution. - - - - Samples a Wishart distributed random variable using the method - Algorithm AS 53: Wishart Variate Generator - W. B. Smith and R. R. Hocking - Applied Statistics, Vol. 21, No. 3 (1972), pp. 341-345 - - The random number generator to use. - The degrees of freedom (n) for the Wishart distribution. - The scale matrix (V) for the Wishart distribution. - a sequence of samples from the distribution. - - - - Samples the distribution. - - The random number generator to use. - The degrees of freedom (n) for the Wishart distribution. - The scale matrix (V) for the Wishart distribution. - The cholesky decomposition to use. - a random number from the distribution. - - - - Discrete Univariate Zipf distribution. - Zipf's law, an empirical law formulated using mathematical statistics, refers to the fact - that many types of data studied in the physical and social sciences can be approximated with - a Zipfian distribution, one of a family of related discrete power law probability distributions. - For details about this distribution, see - Wikipedia - Zipf distribution. - - - - - The s parameter of the distribution. - - - - - The n parameter of the distribution. - - - - - Initializes a new instance of the class. - - The s parameter of the distribution. - The n parameter of the distribution. 
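The Wishart distribution above is matrix-valued; the sketch below (assuming the degrees of freedom are passed as a double and the scale as a positive definite `Matrix<double>`) draws one random matrix and reads off the mean n·V.

```csharp
using System;
using MathNet.Numerics.Distributions;
using MathNet.Numerics.LinearAlgebra;

// Positive definite scale matrix V and degrees of freedom n.
var scaleMatrix = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 2.0, 0.5 },
    { 0.5, 1.0 }
});
var wishart = new Wishart(5.0, scaleMatrix);

// One draw is a random positive definite matrix, generated with the
// AS 53 algorithm referenced above.
Matrix<double> sample = wishart.Sample();

// The mean of the Wishart distribution is n·V.
Matrix<double> mean = wishart.Mean;

Console.WriteLine(sample.ToString());
Console.WriteLine(mean.ToString());
```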
- - - - Initializes a new instance of the class. - - The s parameter of the distribution. - The n parameter of the distribution. - The random number generator which is used to draw random samples. - - - - A string representation of the distribution. - - a string representation of the distribution. - - - - Tests whether the provided values are valid parameters for this distribution. - - The s parameter of the distribution. - The n parameter of the distribution. - - - - Gets or sets the s parameter of the distribution. - - - - - Gets or sets the n parameter of the distribution. - - - - - Gets or sets the random number generator which is used to draw random samples. - - - - - Gets the mean of the distribution. - - - - - Gets the variance of the distribution. - - - - - Gets the standard deviation of the distribution. - - - - - Gets the entropy of the distribution. - - - - - Gets the skewness of the distribution. - - - - - Gets the mode of the distribution. - - - - - Gets the median of the distribution. - - - - - Gets the smallest element in the domain of the distributions which can be represented by an integer. - - - - - Gets the largest element in the domain of the distributions which can be represented by an integer. - - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - the cumulative distribution at location . - - - - Computes the probability mass (PMF) at k, i.e. P(X = k). - - The location in the domain where we want to evaluate the probability mass function. - The s parameter of the distribution. - The n parameter of the distribution. - the probability mass at location . - - - - Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). - - The location in the domain where we want to evaluate the log probability mass function. - The s parameter of the distribution. - The n parameter of the distribution. - the log probability mass at location . - - - - Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). - - The location at which to compute the cumulative distribution function. - The s parameter of the distribution. - The n parameter of the distribution. - the cumulative distribution at location . - - - - - Generates a sample from the Zipf distribution without doing parameter checking. - - The random number generator to use. - The s parameter of the distribution. - The n parameter of the distribution. - a random number from the Zipf distribution. - - - - Draws a random sample from the distribution. - - a sample from the distribution. - - - - Fills an array with samples generated from the distribution. - - - - - Samples an array of zipf distributed random variables. - - a sequence of samples from the distribution. - - - - Samples a random variable. - - The random number generator to use. - The s parameter of the distribution. - The n parameter of the distribution. - - - - Samples a sequence of this random variable. - - The random number generator to use. - The s parameter of the distribution. - The n parameter of the distribution. 
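A short sketch of the Zipf distribution described above, assuming the usual discrete-distribution members (`Probability`, `CumulativeDistribution`, `Sample`) and the constructor order (s, n).

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

// s is the exponent, n the number of ranked elements; the support is 1..n.
var zipf = new Zipf(1.1, 100);

double pRank1 = zipf.Probability(1);             // probability of the most frequent rank
double pTop10 = zipf.CumulativeDistribution(10); // P(rank ≤ 10)
int draw = zipf.Sample();

// Sanity check: the probability masses over the full support sum to one.
double total = Enumerable.Range(1, 100).Sum(k => zipf.Probability(k));

Console.WriteLine($"{pRank1} {pTop10} {draw} {total}");
```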
- - - - Fills an array with samples generated from the distribution. - - The random number generator to use. - The array to fill with the samples. - The s parameter of the distribution. - The n parameter of the distribution. - - - - Samples a random variable. - - The s parameter of the distribution. - The n parameter of the distribution. - - - - Samples a sequence of this random variable. - - The s parameter of the distribution. - The n parameter of the distribution. - - - - Fills an array with samples generated from the distribution. - - The array to fill with the samples. - The s parameter of the distribution. - The n parameter of the distribution. - - - - Integer number theory functions. - - - - - Canonical Modulus. The result has the sign of the divisor. - - - - - Canonical Modulus. The result has the sign of the divisor. - - - - - Canonical Modulus. The result has the sign of the divisor. - - - - - Canonical Modulus. The result has the sign of the divisor. - - - - - Canonical Modulus. The result has the sign of the divisor. - - - - - Remainder (% operator). The result has the sign of the dividend. - - - - - Remainder (% operator). The result has the sign of the dividend. - - - - - Remainder (% operator). The result has the sign of the dividend. - - - - - Remainder (% operator). The result has the sign of the dividend. - - - - - Remainder (% operator). The result has the sign of the dividend. - - - - - Find out whether the provided 32 bit integer is an even number. - - The number to very whether it's even. - True if and only if it is an even number. - - - - Find out whether the provided 64 bit integer is an even number. - - The number to very whether it's even. - True if and only if it is an even number. - - - - Find out whether the provided 32 bit integer is an odd number. - - The number to very whether it's odd. - True if and only if it is an odd number. - - - - Find out whether the provided 64 bit integer is an odd number. - - The number to very whether it's odd. - True if and only if it is an odd number. - - - - Find out whether the provided 32 bit integer is a perfect power of two. - - The number to very whether it's a power of two. - True if and only if it is a power of two. - - - - Find out whether the provided 64 bit integer is a perfect power of two. - - The number to very whether it's a power of two. - True if and only if it is a power of two. - - - - Find out whether the provided 32 bit integer is a perfect square, i.e. a square of an integer. - - The number to very whether it's a perfect square. - True if and only if it is a perfect square. - - - - Find out whether the provided 64 bit integer is a perfect square, i.e. a square of an integer. - - The number to very whether it's a perfect square. - True if and only if it is a perfect square. - - - - Raises 2 to the provided integer exponent (0 <= exponent < 31). - - The exponent to raise 2 up to. - 2 ^ exponent. - - - - - Raises 2 to the provided integer exponent (0 <= exponent < 63). - - The exponent to raise 2 up to. - 2 ^ exponent. - - - - - Evaluate the binary logarithm of an integer number. - - Two-step method using a De Bruijn-like sequence table lookup. - - - - Find the closest perfect power of two that is larger or equal to the provided - 32 bit integer. - - The number of which to find the closest upper power of two. - A power of two. - - - - - Find the closest perfect power of two that is larger or equal to the provided - 64 bit integer. - - The number of which to find the closest upper power of two. 
- A power of two. - - - - - Returns the greatest common divisor (gcd) of two integers using Euclid's algorithm. - - First Integer: a. - Second Integer: b. - Greatest common divisor gcd(a,b) - - - - Returns the greatest common divisor (gcd) of a set of integers using Euclid's - algorithm. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Returns the greatest common divisor (gcd) of a set of integers using Euclid's algorithm. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). - - First Integer: a. - Second Integer: b. - Resulting x, such that a*x + b*y = gcd(a,b). - Resulting y, such that a*x + b*y = gcd(a,b) - Greatest common divisor gcd(a,b) - - - long x,y,d; - d = Fn.GreatestCommonDivisor(45,18,out x, out y); - -> d == 9 && x == 1 && y == -2 - - The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. - - - - - Returns the least common multiple (lcm) of two integers using Euclid's algorithm. - - First Integer: a. - Second Integer: b. - Least common multiple lcm(a,b) - - - - Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. - - List of Integers. - Least common multiple lcm(list of integers) - - - - Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. - - List of Integers. - Least common multiple lcm(list of integers) - - - - Returns the greatest common divisor (gcd) of two big integers. - - First Integer: a. - Second Integer: b. - Greatest common divisor gcd(a,b) - - - - Returns the greatest common divisor (gcd) of a set of big integers. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Returns the greatest common divisor (gcd) of a set of big integers. - - List of Integers. - Greatest common divisor gcd(list of integers) - - - - Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). - - First Integer: a. - Second Integer: b. - Resulting x, such that a*x + b*y = gcd(a,b). - Resulting y, such that a*x + b*y = gcd(a,b) - Greatest common divisor gcd(a,b) - - - long x,y,d; - d = Fn.GreatestCommonDivisor(45,18,out x, out y); - -> d == 9 && x == 1 && y == -2 - - The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. - - - - - Returns the least common multiple (lcm) of two big integers. - - First Integer: a. - Second Integer: b. - Least common multiple lcm(a,b) - - - - Returns the least common multiple (lcm) of a set of big integers. - - List of Integers. - Least common multiple lcm(list of integers) - - - - Returns the least common multiple (lcm) of a set of big integers. - - List of Integers. - Least common multiple lcm(list of integers) - - - - Collection of functions equivalent to those provided by Microsoft Excel - but backed instead by Math.NET Numerics. - We do not recommend to use them except in an intermediate phase when - porting over solutions previously implemented in Excel. - - - - - An algorithm failed to converge. - - - - - An algorithm failed to converge due to a numerical breakdown. - - - - - An error occurred calling native provider function. - - - - - An error occurred calling native provider function. - - - - - Native provider was unable to allocate sufficient memory. - - - - - Native provider failed LU inversion do to a singular U matrix. 
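The integer number-theory helpers described above live on the `Euclid` class; a compact sketch, including the extended-gcd identity from the worked example (9 = 1·45 + (-2)·18):

```csharp
using System;
using MathNet.Numerics;

// Canonical modulus carries the sign of the divisor; remainder carries the dividend's.
Console.WriteLine(Euclid.Modulus(-5, 3));    //  1
Console.WriteLine(Euclid.Remainder(-5, 3));  // -2

// Power-of-two helpers.
Console.WriteLine(Euclid.IsPowerOfTwo(96));        // False
Console.WriteLine(Euclid.CeilingToPowerOfTwo(96)); // 128

// gcd / lcm and the extended gcd from the example above.
Console.WriteLine(Euclid.GreatestCommonDivisor(45, 18)); // 9
Console.WriteLine(Euclid.LeastCommonMultiple(45, 18));   // 90

long x, y;
long d = Euclid.ExtendedGreatestCommonDivisor(45, 18, out x, out y);
Console.WriteLine($"{d} = {x}*45 + ({y})*18");
```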
- - - - - Compound Monthly Return or Geometric Return or Annualized Return - - - - - Average Gain or Gain Mean - This is a simple average (arithmetic mean) of the periods with a gain. It is calculated by summing the returns for gain periods (return 0) - and then dividing the total by the number of gain periods. - - http://www.offshore-library.com/kb/statistics.php - - - - Average Loss or LossMean - This is a simple average (arithmetic mean) of the periods with a loss. It is calculated by summing the returns for loss periods (return < 0) - and then dividing the total by the number of loss periods. - - http://www.offshore-library.com/kb/statistics.php - - - - Calculation is similar to Standard Deviation , except it calculates an average (mean) return only for periods with a gain - and measures the variation of only the gain periods around the gain mean. Measures the volatility of upside performance. - © Copyright 1996, 1999 Gary L.Gastineau. First Edition. © 1992 Swiss Bank Corporation. - - - - - Similar to standard deviation, except this statistic calculates an average (mean) return for only the periods with a loss and then - measures the variation of only the losing periods around this loss mean. This statistic measures the volatility of downside performance. - - http://www.offshore-library.com/kb/statistics.php - - - - This measure is similar to the loss standard deviation except the downside deviation - considers only returns that fall below a defined minimum acceptable return (MAR) rather than the arithmetic mean. - For example, if the MAR is 7%, the downside deviation would measure the variation of each period that falls below - 7%. (The loss standard deviation, on the other hand, would take only losing periods, calculate an average return for - the losing periods, and then measure the variation between each losing return and the losing return average). - - - - - A measure of volatility in returns below the mean. It's similar to standard deviation, but it only - looks at periods where the investment return was less than average return. - - - - - Measures a fund’s average gain in a gain period divided by the fund’s average loss in a losing - period. Periods can be monthly or quarterly depending on the data frequency. - - - - - Find value x that minimizes the scalar function f(x), constrained within bounds, using the Golden Section algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. - The missing gradient is evaluated numerically (forward difference). - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. - For more options and diagnostics consider to use directly. 
- An alternative routine using conjugate gradients (CG) is available in . - - - - - Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. - For more options and diagnostics consider to use directly. - An alternative routine using conjugate gradients (CG) is available in . - - - - - Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Newton algorithm. - For more options and diagnostics consider to use directly. - - - - - Find vector x that minimizes the function f(x) using the Newton algorithm. - For more options and diagnostics consider to use directly. - - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. - Maximum number of iterations. Example: 100. - - - Find a solution of the equation f(x)=0. - The function to find roots from. - The first derivative of the function to find roots from. - The low value of the range where the root is supposed to be. - The high value of the range where the root is supposed to be. - Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. - Maximum number of iterations. Example: 100. - - - - Find both complex roots of the quadratic equation c + b*x + a*x^2 = 0. - Note the special coefficient order ascending by exponent (consistent with polynomials). - - - - - Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. - Note the special coefficient order ascending by exponent (consistent with polynomials). - - - - - Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix - - The coefficients of the polynomial in ascending order, e.g. new double[] {5, 0, 2} = "5 + 0 x^1 + 2 x^2" - The roots of the polynomial - - - - Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix - - The polynomial. - The roots of the polynomial - - - - Find all roots of the Chebychev polynomial of the first kind. - - The polynomial order and therefore the number of roots. - The real domain interval begin where to start sampling. - The real domain interval end where to stop sampling. - Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*(2i-1)/(2n)) - - - - Find all roots of the Chebychev polynomial of the second kind. - - The polynomial order and therefore the number of roots. - The real domain interval begin where to start sampling. - The real domain interval end where to stop sampling. - Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*i/(n-1)) - - - - Least-Squares Curve Fitting Routines - - - - - Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, - returning its best fitting parameters as [a, b] array, - where a is the intercept and b the slope. 
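The root-finding facade described above can be exercised in a couple of lines; note the ascending-by-exponent coefficient order for the closed-form quadratic and cubic solvers. A sketch:

```csharp
using System;
using MathNet.Numerics;

// Bracketed root of f(x) = x³ - 2x - 5 on [2, 3], refined to the default accuracy.
double root = FindRoots.OfFunction(x => x * x * x - 2 * x - 5, 2.0, 3.0);
Console.WriteLine(root); // ≈ 2.0945514815

// Roots of 2 - 3x + x², with coefficients ascending by exponent (c, b, a) as noted above.
var quadraticRoots = FindRoots.Quadratic(2.0, -3.0, 1.0);
Console.WriteLine(quadraticRoots); // the complex pair 1 and 2
```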
- - - - - Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, - returning its best fitting parameter b, - where the intercept is zero and b the slope. - - - - - Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), - returning its best fitting parameters as (a, r) tuple. - - - - - Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), - returning its best fitting parameters as (a, b) tuple. - - - - - Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, - returning its best fitting parameters as (a, b) tuple. - - - - - Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, - returning a function y' for the best fitting line. - - - - - Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. - A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. - - - - - Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, - returning a function y' for the best fitting polynomial. - A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. - - - - - Weighted Least-Squares fitting the points (x,y) and weights w to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. - A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - If an intercept is added, its coefficient will be prepended to the resulting parameters. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, - returning a function y' for the best fitting combination. 
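The simpler curve-fitting entry points above can be sketched as follows; `Fit.Line` returns the intercept/slope pair, `Fit.Polynomial` the coefficients in ascending order compatible with `Polynomial.Evaluate`, and the `*Func` variants return a ready-to-evaluate function.

```csharp
using System;
using MathNet.Numerics;

double[] x = { 1.0, 2.0, 3.0, 4.0, 5.0 };
double[] y = { 3.1, 4.9, 7.2, 8.9, 11.1 };

// Straight line y ≈ a + b·x: Item1 is the intercept a, Item2 the slope b.
var line = Fit.Line(x, y);
Console.WriteLine($"a = {line.Item1}, b = {line.Item2}");

// Second-order polynomial p0 + p1·x + p2·x², coefficients ascending by exponent.
double[] p = Fit.Polynomial(x, y, 2);
Console.WriteLine(string.Join(", ", p));

// Function form of the best-fit line, handy for prediction.
Func<double, double> predict = Fit.LineFunc(x, y);
Console.WriteLine(predict(6.0));
```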
- If an intercept is added, its coefficient will be prepended to the resulting parameters. - - - - - Weighted Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) and weights w to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning a function y' for the best fitting combination. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning its best fitting parameters as [p0, p1, p2, ..., pk] array. - - - - - Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), - returning a function y' for the best fitting combination. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), - returning its best fitting parameter p. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), - returning its best fitting parameter p0 and p1. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), - returning its best fitting parameter p0, p1 and p2. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), - returning a function y' for the best fitting curve. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), - returning a function y' for the best fitting curve. - - - - - Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), - returning a function y' for the best fitting curve. - - - - - Generate samples by sampling a function at the provided points. - - - - - Generate a sample sequence by sampling a function at the provided point sequence. - - - - - Generate samples by sampling a function at the provided points. - - - - - Generate a sample sequence by sampling a function at the provided point sequence. - - - - - Generate a linearly spaced sample vector of the given length between the specified values (inclusive). - Equivalent to MATLAB linspace but with the length as first instead of last argument. 
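The straight-line fit described at the start of this fitting section returns the intercept a and the slope b of y = a + b*x. A self-contained sketch of that ordinary least-squares computation in closed form (illustrative names, not the library's code):

```csharp
using System;
using System.Linq;

static class LineFitSketch
{
    // Ordinary least-squares fit of y ~ a + b*x, returning (intercept a, slope b).
    public static (double A, double B) FitLine(double[] x, double[] y)
    {
        if (x.Length != y.Length || x.Length < 2)
            throw new ArgumentException("Need at least two (x,y) pairs of equal length.");

        double meanX = x.Average(), meanY = y.Average();
        double covXY = 0.0, varX = 0.0;
        for (int i = 0; i < x.Length; i++)
        {
            covXY += (x[i] - meanX) * (y[i] - meanY);
            varX  += (x[i] - meanX) * (x[i] - meanX);
        }
        double b = covXY / varX;       // slope
        double a = meanY - b * meanX;  // intercept
        return (a, b);
    }

    static void Main()
    {
        var (a, b) = FitLine(new[] { 0.0, 1.0, 2.0, 3.0 }, new[] { 1.1, 2.9, 5.2, 7.1 });
        Console.WriteLine($"a={a}, b={b}"); // roughly a ~ 1.0, b ~ 2.0
    }
}
```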
- - - - - Generate samples by sampling a function at linearly spaced points between the specified values (inclusive). - - - - - Generate a base 10 logarithmically spaced sample vector of the given length between the specified decade exponents (inclusive). - Equivalent to MATLAB logspace but with the length as first instead of last argument. - - - - - Generate samples by sampling a function at base 10 logarithmically spaced points between the specified decade exponents (inclusive). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. - Equivalent to MATLAB colon operator (:). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. - Equivalent to MATLAB colon operator (:). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - Equivalent to MATLAB double colon operator (::). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - Equivalent to MATLAB double colon operator (::). - - - - - Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provide step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - Equivalent to MATLAB double colon operator (::). - - - - - Generate samples by sampling a function at linearly spaced points within the inclusive interval (start, stop) and the provide step. - The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. - - - - - Create a periodic wave. - - The number of samples to generate. - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create a periodic wave. - - The number of samples to generate. - The function to apply to each of the values and evaluate the resulting sample. - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create an infinite periodic wave sequence. - - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create an infinite periodic wave sequence. - - The function to apply to each of the values and evaluate the resulting sample. - Samples per time unit (Hz). 
Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create a Sine wave. - - The number of samples to generate. - Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. - Frequency in periods per time unit (Hz). - The maximal reached peak. - The mean, or DC part, of the signal. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create an infinite Sine wave sequence. - - Samples per unit. - Frequency in samples per unit. - The maximal reached peak. - The mean, or DC part, of the signal. - Optional phase offset. - Optional delay, relative to the phase. - - - - Create a periodic square wave, starting with the high phase. - - The number of samples to generate. - Number of samples of the high phase. - Number of samples of the low phase. - Sample value to be emitted during the low phase. - Sample value to be emitted during the high phase. - Optional delay. - - - - Create an infinite periodic square wave sequence, starting with the high phase. - - Number of samples of the high phase. - Number of samples of the low phase. - Sample value to be emitted during the low phase. - Sample value to be emitted during the high phase. - Optional delay. - - - - Create a periodic triangle wave, starting with the raise phase from the lowest sample. - - The number of samples to generate. - Number of samples of the raise phase. - Number of samples of the fall phase. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create an infinite periodic triangle wave sequence, starting with the raise phase from the lowest sample. - - Number of samples of the raise phase. - Number of samples of the fall phase. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create a periodic sawtooth wave, starting with the lowest sample. - - The number of samples to generate. - Number of samples a full sawtooth period. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create an infinite periodic sawtooth wave sequence, starting with the lowest sample. - - Number of samples a full sawtooth period. - Lowest sample value. - Highest sample value. - Optional delay. - - - - Create an array with each field set to the same value. - - The number of samples to generate. - The value that each field should be set to. - - - - Create an infinite sequence where each element has the same value. - - The value that each element should be set to. - - - - Create a Heaviside Step sample vector. - - The number of samples to generate. - The maximal reached peak. - Offset to the time axis. - - - - Create an infinite Heaviside Step sample sequence. - - The maximal reached peak. - Offset to the time axis. - - - - Create a Kronecker Delta impulse sample vector. - - The number of samples to generate. - The maximal reached peak. - Offset to the time axis. Zero or positive. - - - - Create a Kronecker Delta impulse sample vector. - - The maximal reached peak. - Offset to the time axis, hence the sample index of the impulse. - - - - Create a periodic Kronecker Delta impulse sample vector. - - The number of samples to generate. - impulse sequence period. - The maximal reached peak. - Offset to the time axis. Zero or positive. 
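The sine-wave generator described above is parameterised by sample count, sampling rate, frequency, peak amplitude, DC mean and phase. A minimal sketch of such a generator, assuming the usual sampling formula amplitude*sin(2*pi*(frequency/samplingRate)*n + phase) + mean (the exact handling of the optional delay parameter is not shown):

```csharp
using System;

static class SignalSketch
{
    // Sample a sine wave: amplitude*sin(2*pi*(frequency/samplingRate)*n + phase) + mean.
    public static double[] Sinusoidal(int length, double samplingRate, double frequency,
        double amplitude, double mean = 0.0, double phase = 0.0)
    {
        if (samplingRate <= 2.0 * frequency)
            throw new ArgumentException("samplingRate must exceed twice the frequency (Nyquist).");

        double step = 2.0 * Math.PI * frequency / samplingRate;
        var samples = new double[length];
        for (int n = 0; n < length; n++)
            samples[n] = amplitude * Math.Sin(step * n + phase) + mean;
        return samples;
    }

    static void Main()
    {
        // 48 samples of a 1 kHz tone at 8 kHz sampling rate, peak 1.0, no DC offset.
        double[] tone = Sinusoidal(48, 8000.0, 1000.0, 1.0);
        Console.WriteLine(tone[2]); // sin(2*pi*1000/8000*2) = sin(pi/2) = 1
    }
}
```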
- - - - Create a Kronecker Delta impulse sample vector. - - impulse sequence period. - The maximal reached peak. - Offset to the time axis. Zero or positive. - - - - Generate samples generated by the given computation. - - - - - Generate an infinite sequence generated by the given computation. - - - - - Generate a Fibonacci sequence, including zero as first value. - - - - - Generate an infinite Fibonacci sequence, including zero as first value. - - - - - Create random samples, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Create an infinite random sample sequence, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate samples by sampling a function at samples from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate a sample sequence by sampling a function at samples from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate samples by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Generate a sample sequence by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. - Faster than other methods but with reduced guarantees on randomness. - - - - - Create samples with independent amplitudes of standard distribution. - - - - - Create an infinite sample sequence with independent amplitudes of standard distribution. - - - - - Create samples with independent amplitudes of normal distribution and a flat spectral density. - - - - - Create an infinite sample sequence with independent amplitudes of normal distribution and a flat spectral density. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Create random samples. - - - - - Create an infinite random sample sequence. - - - - - Generate samples by sampling a function at samples from a probability distribution. - - - - - Generate a sample sequence by sampling a function at samples from a probability distribution. - - - - - Generate samples by sampling a function at sample pairs from a probability distribution. - - - - - Generate a sample sequence by sampling a function at sample pairs from a probability distribution. - - - - - Globalized String Handling Helpers - - - - - Tries to get a from the format provider, - returning the current culture if it fails. - - - An that supplies culture-specific - formatting information. - - A instance. - - - - Tries to get a from the format - provider, returning the current culture if it fails. - - - An that supplies culture-specific - formatting information. - - A instance. - - - - Tries to get a from the format provider, returning the current culture if it fails. - - - An that supplies culture-specific - formatting information. - - A instance. - - - - Globalized Parsing: Tokenize a node by splitting it into several nodes. - - Node that contains the trimmed string to be tokenized. - List of keywords to tokenize by. - keywords to skip looking for (because they've already been handled). 
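Among the sequence generators described above is a Fibonacci sequence that includes zero as its first value, available both as a finite array and as an infinite sequence. As a small illustration (not the library's code), an infinite iterator version:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class SequenceSketch
{
    // Infinite Fibonacci sequence starting with 0, 1, 1, 2, 3, 5, ...
    public static IEnumerable<long> Fibonacci()
    {
        long previous = 0, current = 1;
        while (true)
        {
            yield return previous;
            long next = previous + current;
            previous = current;
            current = next;
        }
    }

    static void Main()
    {
        // Take a finite prefix of the infinite sequence.
        Console.WriteLine(string.Join(", ", Fibonacci().Take(8))); // 0, 1, 1, 2, 3, 5, 8, 13
    }
}
```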
- - - - Globalized Parsing: Parse a double number - - First token of the number. - Culture Info. - The parsed double number using the given culture information. - - - - - Globalized Parsing: Parse a float number - - First token of the number. - Culture Info. - The parsed float number using the given culture information. - - - - - Calculates r^2, the square of the sample correlation coefficient between - the observed outcomes and the observed predictor values. - Not to be confused with R^2, the coefficient of determination, see . - - The modelled/predicted values - The observed/actual values - Squared Person product-momentum correlation coefficient. - - - - Calculates r, the sample correlation coefficient between the observed outcomes - and the observed predictor values. - - The modelled/predicted values - The observed/actual values - Person product-momentum correlation coefficient. - - - - Calculates the Standard Error of the regression, given a sequence of - modeled/predicted values, and a sequence of actual/observed values - - The modelled/predicted values - The observed/actual values - The Standard Error of the regression - - - - Calculates the Standard Error of the regression, given a sequence of - modeled/predicted values, and a sequence of actual/observed values - - The modelled/predicted values - The observed/actual values - The degrees of freedom by which the - number of samples is reduced for performing the Standard Error calculation - The Standard Error of the regression - - - - Calculates the R-Squared value, also known as coefficient of determination, - given some modelled and observed values. - - The values expected from the model. - The actual values obtained. - Coefficient of determination. - - - - Complex Fast (FFT) Implementation of the Discrete Fourier Transform (DFT). - - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Real part of the sample vector, where the FFT is evaluated in place. - Imaginary part of the sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - - Real part of the sample vector, where the FFT is evaluated in place. - Imaginary part of the sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed from the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. 
- Fourier Transform Convention Options. - - - - Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed form the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. - - Sample data, where the FFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. - - Sample data, where the FFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. - - Sample data, organized row by row, where the FFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. - - Sample data, organized row by row, where the FFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. - - Sample matrix, where the FFT is evaluated in place - Fourier Transform Convention Options. - - - - Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. - - Sample matrix, where the FFT is evaluated in place - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Spectrum data, where the iFFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - - Real part of the sample vector, where the iFFT is evaluated in place. - Imaginary part of the sample vector, where the iFFT is evaluated in place. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. 
- - Real part of the sample vector, where the iFFT is evaluated in place. - Imaginary part of the sample vector, where the iFFT is evaluated in place. - Fourier Transform Convention Options. - - - - Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed form the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. - Fourier Transform Convention Options. - - - - Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. - Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), - the spectrum can be fully reconstructed form the positive frequencies only (first half). - The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. - - Data array of length N+2 (if N is even) or N+1 (if N is odd). - The number of samples. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. - - Spectrum data, where the iFFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. - - Spectrum data, where the iFFT is evaluated in place. - - The data size per dimension. The first dimension is the major one. - For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. - - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. - - Sample data, organized row by row, where the iFFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. - - Sample data, organized row by row, where the iFFT is evaluated in place - The number of rows. - The number of columns. - Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. - - Sample matrix, where the iFFT is evaluated in place - Fourier Transform Convention Options. - - - - Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. - - Sample matrix, where the iFFT is evaluated in place - Fourier Transform Convention Options. - - - - Naive forward DFT, useful e.g. to verify faster algorithms. - - Time-space sample vector. - Fourier Transform Convention Options. - Corresponding frequency-space vector. - - - - Naive forward DFT, useful e.g. to verify faster algorithms. - - Time-space sample vector. - Fourier Transform Convention Options. - Corresponding frequency-space vector. 
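The naive forward DFT described below is the O(N^2) reference against which the faster Radix-2 and Bluestein FFTs can be verified. A minimal sketch using one common convention (negative exponent, no scaling; the Fourier Transform Convention Options mentioned throughout select other conventions):

```csharp
using System;
using System.Numerics;

static class FourierSketch
{
    // Naive O(N^2) forward DFT: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N).
    // Useful only as a reference to verify faster FFT implementations.
    public static Complex[] NaiveForward(Complex[] samples)
    {
        int n = samples.Length;
        var spectrum = new Complex[n];
        for (int k = 0; k < n; k++)
        {
            Complex sum = Complex.Zero;
            for (int t = 0; t < n; t++)
            {
                double angle = -2.0 * Math.PI * k * t / n;
                sum += samples[t] * Complex.FromPolarCoordinates(1.0, angle);
            }
            spectrum[k] = sum;
        }
        return spectrum;
    }

    static void Main()
    {
        // A pure tone with one full period over 8 samples concentrates in bin 1.
        var x = new Complex[8];
        for (int t = 0; t < 8; t++) x[t] = new Complex(Math.Cos(2.0 * Math.PI * t / 8), 0);
        var spectrum = NaiveForward(x);
        Console.WriteLine(spectrum[1].Magnitude); // ~4 (and ~4 in bin 7, the mirrored frequency)
    }
}
```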
- - - - Naive inverse DFT, useful e.g. to verify faster algorithms. - - Frequency-space sample vector. - Fourier Transform Convention Options. - Corresponding time-space vector. - - - - Naive inverse DFT, useful e.g. to verify faster algorithms. - - Frequency-space sample vector. - Fourier Transform Convention Options. - Corresponding time-space vector. - - - - Radix-2 forward FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Radix-2 forward FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Radix-2 inverse FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Radix-2 inverse FFT for power-of-two sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - - Bluestein forward FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Bluestein forward FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Bluestein inverse FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Bluestein inverse FFT for arbitrary sized sample vectors. - - Sample vector, where the FFT is evaluated in place. - Fourier Transform Convention Options. - - - - Generate the frequencies corresponding to each index in frequency space. - The frequency space has a resolution of sampleRate/N. - Index 0 corresponds to the DC part, the following indices correspond to - the positive frequencies up to the Nyquist frequency (sampleRate/2), - followed by the negative frequencies wrapped around. - - Number of samples. - The sampling rate of the time-space data. - - - - Fourier Transform Convention - - - - - Inverse integrand exponent (forward: positive sign; inverse: negative sign). - - - - - Only scale by 1/N in the inverse direction; No scaling in forward direction. - - - - - Don't scale at all (neither on forward nor on inverse transformation). - - - - - Universal; Symmetric scaling and common exponent (used in Maple). - - - - - Only scale by 1/N in the inverse direction; No scaling in forward direction (used in Matlab). [= AsymmetricScaling] - - - - - Inverse integrand exponent; No scaling at all (used in all Numerical Recipes based implementations). [= InverseExponent | NoScaling] - - - - - Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). - - - Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). - - - - - Naive forward DHT, useful e.g. to verify faster algorithms. - - Time-space sample vector. - Hartley Transform Convention Options. - Corresponding frequency-space vector. - - - - Naive inverse DHT, useful e.g. to verify faster algorithms. - - Frequency-space sample vector. - Hartley Transform Convention Options. - Corresponding time-space vector. - - - - Rescale FFT-the resulting vector according to the provided convention options. - - Fourier Transform Convention Options. - Sample Vector. - - - - Rescale the iFFT-resulting vector according to the provided convention options. - - Fourier Transform Convention Options. - Sample Vector. - - - - Naive generic DHT, useful e.g. 
to verify faster algorithms. - - Time-space sample vector. - Corresponding frequency-space vector. - - - - Hartley Transform Convention - - - - - Only scale by 1/N in the inverse direction; No scaling in forward direction. - - - - - Don't scale at all (neither on forward nor on inverse transformation). - - - - - Universal; Symmetric scaling. - - - - - Numerical Integration (Quadrature). - - - - - Approximation of the definite integral of an analytic smooth function on a closed interval. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function on a closed interval. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Approximation of the finite integral in the given interval. - - - - Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. - - The 2-dimensional analytic smooth function to integrate. - Where the interval starts for the first (inside) integral, exclusive and finite. - Where the interval ends for the first (inside) integral, exclusive and finite. - Where the interval starts for the second (outside) integral, exclusive and finite. - /// Where the interval ends for the second (outside) integral, exclusive and finite. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. - - The 2-dimensional analytic smooth function to integrate. - Where the interval starts for the first (inside) integral, exclusive and finite. - Where the interval ends for the first (inside) integral, exclusive and finite. - Where the interval starts for the second (outside) integral, exclusive and finite. - /// Where the interval ends for the second (outside) integral, exclusive and finite. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth function to integrate. - Where the interval starts. - Where the interval stops. - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by Gauss-Legendre quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth function to integrate. - Where the interval starts. - Where the interval stops. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. 
- Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth function to integrate. - Where the interval starts. - Where the interval stops. - The expected relative accuracy of the approximation. - The maximum number of interval splittings permitted before stopping. - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth function to integrate. - Where the interval starts. - Where the interval stops. - The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation - The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. - The expected relative accuracy of the approximation. - The maximum number of interval splittings permitted before stopping - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points - Approximation of the finite integral in the given interval. - - - - Numerical Contour Integration of a complex-valued function over a real variable,. - - - - - Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts. - Where the interval stops. - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts. - Where the interval stops. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts. - Where the interval stops. - The expected relative accuracy of the approximation. - The maximum number of interval splittings permitted before stopping - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points - Approximation of the finite integral in the given interval. - - - - Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. 
When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts. - Where the interval stops. - The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation - The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. - The expected relative accuracy of the approximation. - The maximum number of interval splittings permitted before stopping - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points - Approximation of the finite integral in the given interval. - - - - Analytic integration algorithm for smooth functions with no discontinuities - or derivative discontinuities and no poles inside the interval. - - - - - Maximum number of iterations, until the asked - maximum error is (likely to be) satisfied. - - - - - Approximate the integral by the double exponential transformation - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Approximate the integral by the double exponential transformation - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Compute the abscissa vector for a single level. - - The level to evaluate the abscissa vector for. - Abscissa Vector. - - - - Compute the weight vector for a single level. - - The level to evaluate the weight vector for. - Weight Vector. - - - - Precomputed abscissa vector per level. - - - - - Precomputed weight vector per level. - - - - - Getter for the order. - - - - - Getter that returns a clone of the array containing the Kronrod abscissas. - - - - - Getter that returns a clone of the array containing the Kronrod weights. - - - - - Getter that returns a clone of the array containing the Gauss weights. - - - - - Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b) - - The analytic smooth function to integrate - Where the interval starts - Where the interval stops - The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation - The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. - The maximum relative error in the result - The maximum number of interval splittings permitted before stopping - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points - - - - Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b) - - The analytic smooth complex function to integrate, defined on the real axis. - Where the interval starts - Where the interval stops - The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation - The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. 
- The maximum relative error in the result - The maximum number of interval splittings permitted before stopping - The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points - - - - - Approximates a definite integral using an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - - - - - Initializes a new instance of the class. - - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - - - - Gettter for the ith abscissa. - - Index of the ith abscissa. - The ith abscissa. - - - - Getter that returns a clone of the array containing the abscissas. - - - - - Getter for the ith weight. - - Index of the ith weight. - The ith weight. - - - - Getter that returns a clone of the array containing the weights. - - - - - Getter for the order. - - - - - Getter for the InvervalBegin. - - - - - Getter for the InvervalEnd. - - - - - Approximates a definite integral using an Nth order Gauss-Legendre rule. - - The analytic smooth function to integrate. - Where the interval starts, exclusive and finite. - Where the interval ends, exclusive and finite. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Approximates a definite integral using an Nth order Gauss-Legendre rule. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts, exclusive and finite. - Where the interval ends, exclusive and finite. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. - - The 2-dimensional analytic smooth function to integrate. - Where the interval starts for the first (inside) integral, exclusive and finite. - Where the interval ends for the first (inside) integral, exclusive and finite. - Where the interval starts for the second (outside) integral, exclusive and finite. - /// Where the interval ends for the second (outside) integral, exclusive and finite. - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Approximation of the finite integral in the given interval. - - - - Contains a method to compute the Gauss-Kronrod abscissas/weights and precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. - - - Contains a method to compute the Gauss-Kronrod abscissas/weights. 
- - - - - Precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. - - - - - Computes the Gauss-Kronrod abscissas/weights and Gauss weights. - - Defines an Nth order Gauss-Kronrod rule. The order also defines the number of abscissas and weights for the rule. - Required precision to compute the abscissas/weights. - Object containing the non-negative abscissas/weights, order. - - - - Returns coefficients of a Stieltjes polynomial in terms of Legendre polynomials. - - - - - Return value and derivative of a Legendre series at given points. - - - - - Return value and derivative of a Legendre polynomial of order at given points. - - - - - Creates a Gauss-Kronrod point. - - - - - Getter for the GaussKronrodPoint. - - Defines an Nth order Gauss-Kronrod rule. Precomputed Gauss-Kronrod abscissas/weights for orders 15, 21, 31, 41, 51, 61 are used, otherwise they're calculated on the fly. - Object containing the non-negative abscissas/weights, and order. - - - - Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. - - - Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. - - - - - Precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. - - - - - Computes the Gauss-Legendre abscissas/weights. - See Pavel Holoborodko for a description of the algorithm. - - Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. - Required precision to compute the abscissas/weights. 1e-10 is usually fine. - Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. - - - - Creates and maps a Gauss-Legendre point. - - - - - Getter for the GaussPoint. - - Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. - - - - Getter for the GaussPoint. - - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. - Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. - - - - Maps the non-negative abscissas/weights from the interval [-1, 1] to the interval [intervalBegin, intervalEnd]. - - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. - Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. - - - - Contains the abscissas/weights, order, and intervalBegin/intervalEnd. - - - - - Contains two GaussPoint. - - - - - Approximation algorithm for definite integrals by the Trapezium rule of the Newton-Cotes family. 
- - - Wikipedia - Trapezium Rule - - - - - Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Approximation of the finite integral in the given interval. - - - - Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth complex function to integrate, defined on real domain. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Approximation of the finite integral in the given interval. - - - - Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Number of composite subdivision partitions. - Approximation of the finite integral in the given interval. - - - - Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth complex function to integrate, defined on real domain. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Number of composite subdivision partitions. - Approximation of the finite integral in the given interval. - - - - Adaptive approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - The expected accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Adaptive approximation of the definite integral in the provided interval by the trapezium rule. - - The analytic smooth complex function to integrate, define don real domain. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - The expected accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Adaptive approximation of the definite integral by the trapezium rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Abscissa vector per level provider. - Weight vector per level provider. - First Level Step - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Adaptive approximation of the definite integral by the trapezium rule. - - The analytic smooth complex function to integrate, defined on the real domain. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Abscissa vector per level provider. - Weight vector per level provider. - First Level Step - The expected relative accuracy of the approximation. - Approximation of the finite integral in the given interval. - - - - Approximation algorithm for definite integrals by Simpson's rule. - - - - - Direct 3-point approximation of the definite integral in the provided interval by Simpson's rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Approximation of the finite integral in the given interval. 
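The composite trapezium rule described above splits [a,b] into equal partitions and sums the trapezoid areas. A compact sketch (illustrative only, not the library's implementation):

```csharp
using System;

static class IntegrationSketch
{
    // Composite trapezium rule over [a, b] with 'partitions' equal subintervals:
    // h * (f(a)/2 + f(a+h) + ... + f(b-h) + f(b)/2), where h = (b-a)/partitions.
    public static double Trapezium(Func<double, double> f, double a, double b, int partitions)
    {
        if (partitions < 1) throw new ArgumentException("Need at least one partition.");

        double h = (b - a) / partitions;
        double sum = 0.5 * (f(a) + f(b));
        for (int i = 1; i < partitions; i++)
            sum += f(a + i * h);
        return h * sum;
    }

    static void Main()
    {
        // Integral of sin(x) over [0, pi] is exactly 2.
        Console.WriteLine(Trapezium(Math.Sin, 0.0, Math.PI, 1000)); // ~1.9999984
    }
}
```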
- - - - Composite N-point approximation of the definite integral in the provided interval by Simpson's rule. - - The analytic smooth function to integrate. - Where the interval starts, inclusive and finite. - Where the interval stops, inclusive and finite. - Even number of composite subdivision partitions. - Approximation of the finite integral in the given interval. - - - - Interpolation Factory. - - - - - Creates an interpolation based on arbitrary points. - - The sample points t. - The sample point values x(t). - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. - - - if your data is already sorted in arrays, consider to use - MathNet.Numerics.Interpolation.Barycentric.InterpolateRationalFloaterHormannSorted - instead, which is more efficient. - - - - - Create a Floater-Hormann rational pole-free interpolation based on arbitrary points. - - The sample points t. - The sample point values x(t). - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. - - - if your data is already sorted in arrays, consider to use - MathNet.Numerics.Interpolation.Barycentric.InterpolateRationalFloaterHormannSorted - instead, which is more efficient. - - - - - Create a Bulirsch Stoer rational interpolation based on arbitrary points. - - The sample points t. - The sample point values x(t). - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. - - - if your data is already sorted in arrays, consider to use - MathNet.Numerics.Interpolation.BulirschStoerRationalInterpolation.InterpolateSorted - instead, which is more efficient. - - - - - Create a barycentric polynomial interpolation where the given sample points are equidistant. - - The sample points t, must be equidistant. - The sample point values x(t). - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. - - - if your data is already sorted in arrays, consider to use - MathNet.Numerics.Interpolation.Barycentric.InterpolatePolynomialEquidistantSorted - instead, which is more efficient. - - - - - Create a Neville polynomial interpolation based on arbitrary points. - If the points happen to be equidistant, consider to use the much more robust PolynomialEquidistant instead. - Otherwise, consider whether RationalWithoutPoles would not be a more robust alternative. - - The sample points t. - The sample point values x(t). - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. - - - if your data is already sorted in arrays, consider to use - MathNet.Numerics.Interpolation.NevillePolynomialInterpolation.InterpolateSorted - instead, which is more efficient. - - - - - Create a piecewise linear interpolation based on arbitrary points. - - The sample points t. - The sample point values x(t). - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. 
- - - if your data is already sorted in arrays, consider to use - MathNet.Numerics.Interpolation.LinearSpline.InterpolateSorted - instead, which is more efficient. - - - - - Create piecewise log-linear interpolation based on arbitrary points. - - The sample points t. - The sample point values x(t). - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. - - - if your data is already sorted in arrays, consider to use - MathNet.Numerics.Interpolation.LogLinear.InterpolateSorted - instead, which is more efficient. - - - - - Create an piecewise natural cubic spline interpolation based on arbitrary points, - with zero secondary derivatives at the boundaries. - - The sample points t. - The sample point values x(t). - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. - - - if your data is already sorted in arrays, consider to use - MathNet.Numerics.Interpolation.CubicSpline.InterpolateNaturalSorted - instead, which is more efficient. - - - - - Create an piecewise cubic Akima spline interpolation based on arbitrary points. - Akima splines are robust to outliers. - - The sample points t. - The sample point values x(t). - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. - - - if your data is already sorted in arrays, consider to use - MathNet.Numerics.Interpolation.CubicSpline.InterpolateAkimaSorted - instead, which is more efficient. - - - - - Create a piecewise cubic Hermite spline interpolation based on arbitrary points - and their slopes/first derivative. - - The sample points t. - The sample point values x(t). - The slope at the sample points. Optimized for arrays. - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. - - - if your data is already sorted in arrays, consider to use - MathNet.Numerics.Interpolation.CubicSpline.InterpolateHermiteSorted - instead, which is more efficient. - - - - - Create a step-interpolation based on arbitrary points. - - The sample points t. - The sample point values x(t). - - An interpolation scheme optimized for the given sample points and values, - which can then be used to compute interpolations and extrapolations - on arbitrary points. - - - if your data is already sorted in arrays, consider to use - MathNet.Numerics.Interpolation.StepInterpolation.InterpolateSorted - instead, which is more efficient. - - - - - Barycentric Interpolation Algorithm. - - Supports neither differentiation nor integration. - - - Sample points (N), sorted ascendingly. - Sample values (N), sorted ascendingly by x. - Barycentric weights (N), sorted ascendingly by x. - - - - Create a barycentric polynomial interpolation from a set of (x,y) value pairs with equidistant x, sorted ascendingly by x. - - - - - Create a barycentric polynomial interpolation from an unordered set of (x,y) value pairs with equidistant x. - WARNING: Works in-place and can thus causes the data array to be reordered. - - - - - Create a barycentric polynomial interpolation from an unsorted set of (x,y) value pairs with equidistant x. 
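For the equidistant case described above, barycentric polynomial interpolation reduces to fixed binomial weights w_i = (-1)^i * C(n-1, i) in the second barycentric form. A sketch under that standard assumption (names are illustrative, not the library's API):

```csharp
using System;

static class BarycentricSketch
{
    // Evaluate the interpolating polynomial through equidistant points (x[i], y[i])
    // using the second barycentric form with weights w_i = (-1)^i * C(n-1, i).
    public static double Evaluate(double[] x, double[] y, double t)
    {
        int n = x.Length;
        double numerator = 0.0, denominator = 0.0;
        double w = 1.0;                       // running weight: starts at C(n-1, 0) = 1
        for (int i = 0; i < n; i++)
        {
            if (t == x[i]) return y[i];       // exactly on a sample point
            double term = ((i & 1) == 0 ? w : -w) / (t - x[i]);
            numerator += term * y[i];
            denominator += term;
            w = w * (n - 1 - i) / (i + 1);    // next binomial coefficient C(n-1, i+1)
        }
        return numerator / denominator;
    }

    static void Main()
    {
        // Four equidistant samples of x^2; the interpolant reproduces it exactly.
        double[] xs = { 0.0, 1.0, 2.0, 3.0 };
        double[] ys = { 0.0, 1.0, 4.0, 9.0 };
        Console.WriteLine(Evaluate(xs, ys, 1.5)); // 2.25
    }
}
```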
- - - - - Create a barycentric polynomial interpolation from a set of values related to linearly/equidistant spaced points within an interval. - - - - - Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. - The values are assumed to be sorted ascendingly by x. - - Sample points (N), sorted ascendingly. - Sample values (N), sorted ascendingly by x. - - Order of the interpolation scheme, 0 <= order <= N. - In most cases a value between 3 and 8 gives good results. - - - - - Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. - WARNING: Works in-place and can thus causes the data array to be reordered. - - Sample points (N), no sorting assumed. - Sample values (N). - - Order of the interpolation scheme, 0 <= order <= N. - In most cases a value between 3 and 8 gives good results. - - - - - Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. - - Sample points (N), no sorting assumed. - Sample values (N). - - Order of the interpolation scheme, 0 <= order <= N. - In most cases a value between 3 and 8 gives good results. - - - - - Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. - The values are assumed to be sorted ascendingly by x. - - Sample points (N), sorted ascendingly. - Sample values (N), sorted ascendingly by x. - - - - Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. - WARNING: Works in-place and can thus causes the data array to be reordered. - - Sample points (N), no sorting assumed. - Sample values (N). - - - - Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. - - Sample points (N), no sorting assumed. - Sample values (N). - - - - Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). - - - - - Gets a value indicating whether the algorithm supports integration (interpolated quadrature). - - - - - Interpolate at point t. - - Point t to interpolate at. - Interpolated value x(t). - - - - Differentiate at point t. NOT SUPPORTED. - - Point t to interpolate at. - Interpolated first derivative at point t. - - - - Differentiate twice at point t. NOT SUPPORTED. - - Point t to interpolate at. - Interpolated second derivative at point t. - - - - Indefinite integral at point t. NOT SUPPORTED. - - Point t to integrate at. - - - - Definite integral between points a and b. NOT SUPPORTED. - - Left bound of the integration interval [a,b]. - Right bound of the integration interval [a,b]. - - - - Rational Interpolation (with poles) using Roland Bulirsch and Josef Stoer's Algorithm. - - - - This algorithm supports neither differentiation nor integration. - - - - - Sample Points t, sorted ascendingly. - Sample Values x(t), sorted ascendingly by x. - - - - Create a Bulirsch-Stoer rational interpolation from a set of (x,y) value pairs, sorted ascendingly by x. - - - - - Create a Bulirsch-Stoer rational interpolation from an unsorted set of (x,y) value pairs. - WARNING: Works in-place and can thus causes the data array to be reordered. - - - - - Create a Bulirsch-Stoer rational interpolation from an unsorted set of (x,y) value pairs. - - - - - Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). 
- - - - - Gets a value indicating whether the algorithm supports integration (interpolated quadrature). - - - - - Interpolate at point t. - - Point t to interpolate at. - Interpolated value x(t). - - - - Differentiate at point t. NOT SUPPORTED. - - Point t to interpolate at. - Interpolated first derivative at point t. - - - - Differentiate twice at point t. NOT SUPPORTED. - - Point t to interpolate at. - Interpolated second derivative at point t. - - - - Indefinite integral at point t. NOT SUPPORTED. - - Point t to integrate at. - - - - Definite integral between points a and b. NOT SUPPORTED. - - Left bound of the integration interval [a,b]. - Right bound of the integration interval [a,b]. - - - - Cubic Spline Interpolation. - - Supports both differentiation and integration. - - - sample points (N+1), sorted ascending - Zero order spline coefficients (N) - First order spline coefficients (N) - second order spline coefficients (N) - third order spline coefficients (N) - - - - Create a Hermite cubic spline interpolation from a set of (x,y) value pairs and their slope (first derivative), sorted ascendingly by x. - - - - - Create a Hermite cubic spline interpolation from an unsorted set of (x,y) value pairs and their slope (first derivative). - WARNING: Works in-place and can thus causes the data array to be reordered. - - - - - Create a Hermite cubic spline interpolation from an unsorted set of (x,y) value pairs and their slope (first derivative). - - - - - Create an Akima cubic spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. - Akima splines are robust to outliers. - - - - - Create an Akima cubic spline interpolation from an unsorted set of (x,y) value pairs. - Akima splines are robust to outliers. - WARNING: Works in-place and can thus causes the data array to be reordered. - - - - - Create an Akima cubic spline interpolation from an unsorted set of (x,y) value pairs. - Akima splines are robust to outliers. - - - - - Create a cubic spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x, - and custom boundary/termination conditions. - - - - - Create a cubic spline interpolation from an unsorted set of (x,y) value pairs and custom boundary/termination conditions. - WARNING: Works in-place and can thus causes the data array to be reordered. - - - - - Create a cubic spline interpolation from an unsorted set of (x,y) value pairs and custom boundary/termination conditions. - - - - - Create a natural cubic spline interpolation from a set of (x,y) value pairs - and zero second derivatives at the two boundaries, sorted ascendingly by x. - - - - - Create a natural cubic spline interpolation from an unsorted set of (x,y) value pairs - and zero second derivatives at the two boundaries. - WARNING: Works in-place and can thus causes the data array to be reordered. - - - - - Create a natural cubic spline interpolation from an unsorted set of (x,y) value pairs - and zero second derivatives at the two boundaries. - - - - - Three-Point Differentiation Helper. - - Sample Points t. - Sample Values x(t). - Index of the point of the differentiation. - Index of the first sample. - Index of the second sample. - Index of the third sample. - The derivative approximation. - - - - Tridiagonal Solve Helper. - - The a-vector[n]. - The b-vector[n], will be modified by this function. - The c-vector[n]. - The d-vector[n], will be modified by this function. 
- The x-vector[n] - - - - Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). - - - - - Gets a value indicating whether the algorithm supports integration (interpolated quadrature). - - - - - Interpolate at point t. - - Point t to interpolate at. - Interpolated value x(t). - - - - Differentiate at point t. - - Point t to interpolate at. - Interpolated first derivative at point t. - - - - Differentiate twice at point t. - - Point t to interpolate at. - Interpolated second derivative at point t. - - - - Indefinite integral at point t. - - Point t to integrate at. - - - - Definite integral between points a and b. - - Left bound of the integration interval [a,b]. - Right bound of the integration interval [a,b]. - - - - Find the index of the greatest sample point smaller than t, - or the left index of the closest segment for extrapolation. - - - - - Interpolation within the range of a discrete set of known data points. - - - - - Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). - - - - - Gets a value indicating whether the algorithm supports integration (interpolated quadrature). - - - - - Interpolate at point t. - - Point t to interpolate at. - Interpolated value x(t). - - - - Differentiate at point t. - - Point t to interpolate at. - Interpolated first derivative at point t. - - - - Differentiate twice at point t. - - Point t to interpolate at. - Interpolated second derivative at point t. - - - - Indefinite integral at point t. - - Point t to integrate at. - - - - Definite integral between points a and b. - - Left bound of the integration interval [a,b]. - Right bound of the integration interval [a,b]. - - - - Piece-wise Linear Interpolation. - - Supports both differentiation and integration. - - - Sample points (N+1), sorted ascending - Sample values (N or N+1) at the corresponding points; intercept, zero order coefficients - Slopes (N) at the sample points (first order coefficients): N - - - - Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. - - - - - Create a linear spline interpolation from an unsorted set of (x,y) value pairs. - WARNING: Works in-place and can thus causes the data array to be reordered. - - - - - Create a linear spline interpolation from an unsorted set of (x,y) value pairs. - - - - - Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). - - - - - Gets a value indicating whether the algorithm supports integration (interpolated quadrature). - - - - - Interpolate at point t. - - Point t to interpolate at. - Interpolated value x(t). - - - - Differentiate at point t. - - Point t to interpolate at. - Interpolated first derivative at point t. - - - - Differentiate twice at point t. - - Point t to interpolate at. - Interpolated second derivative at point t. - - - - Indefinite integral at point t. - - Point t to integrate at. - - - - Definite integral between points a and b. - - Left bound of the integration interval [a,b]. - Right bound of the integration interval [a,b]. - - - - Find the index of the greatest sample point smaller than t, - or the left index of the closest segment for extrapolation. - - - - - Piece-wise Log-Linear Interpolation - - This algorithm supports differentiation, not integration. 
- - - - Internal Spline Interpolation - - - - Sample points (N), sorted ascending - Natural logarithm of the sample values (N) at the corresponding points - - - - Create a piecewise log-linear interpolation from a set of (x,y) value pairs, sorted ascendingly by x. - - - - - Create a piecewise log-linear interpolation from an unsorted set of (x,y) value pairs. - WARNING: Works in-place and can thus causes the data array to be reordered and modified. - - - - - Create a piecewise log-linear interpolation from an unsorted set of (x,y) value pairs. - - - - - Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). - - - - - Gets a value indicating whether the algorithm supports integration (interpolated quadrature). - - - - - Interpolate at point t. - - Point t to interpolate at. - Interpolated value x(t). - - - - Differentiate at point t. - - Point t to interpolate at. - Interpolated first derivative at point t. - - - - Differentiate twice at point t. - - Point t to interpolate at. - Interpolated second derivative at point t. - - - - Indefinite integral at point t. - - Point t to integrate at. - - - - Definite integral between points a and b. - - Left bound of the integration interval [a,b]. - Right bound of the integration interval [a,b]. - - - - Lagrange Polynomial Interpolation using Neville's Algorithm. - - - - This algorithm supports differentiation, but doesn't support integration. - - - When working with equidistant or Chebyshev sample points it is - recommended to use the barycentric algorithms specialized for - these cases instead of this arbitrary Neville algorithm. - - - - - Sample Points t, sorted ascendingly. - Sample Values x(t), sorted ascendingly by x. - - - - Create a Neville polynomial interpolation from a set of (x,y) value pairs, sorted ascendingly by x. - - - - - Create a Neville polynomial interpolation from an unsorted set of (x,y) value pairs. - WARNING: Works in-place and can thus causes the data array to be reordered. - - - - - Create a Neville polynomial interpolation from an unsorted set of (x,y) value pairs. - - - - - Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). - - - - - Gets a value indicating whether the algorithm supports integration (interpolated quadrature). - - - - - Interpolate at point t. - - Point t to interpolate at. - Interpolated value x(t). - - - - Differentiate at point t. - - Point t to interpolate at. - Interpolated first derivative at point t. - - - - Differentiate twice at point t. - - Point t to interpolate at. - Interpolated second derivative at point t. - - - - Indefinite integral at point t. NOT SUPPORTED. - - Point t to integrate at. - - - - Definite integral between points a and b. NOT SUPPORTED. - - Left bound of the integration interval [a,b]. - Right bound of the integration interval [a,b]. - - - - Quadratic Spline Interpolation. - - Supports both differentiation and integration. - - - sample points (N+1), sorted ascending - Zero order spline coefficients (N) - First order spline coefficients (N) - second order spline coefficients (N) - - - - Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). - - - - - Gets a value indicating whether the algorithm supports integration (interpolated quadrature). - - - - - Interpolate at point t. - - Point t to interpolate at. - Interpolated value x(t). - - - - Differentiate at point t. - - Point t to interpolate at. - Interpolated first derivative at point t. 
- - - - Differentiate twice at point t. - - Point t to interpolate at. - Interpolated second derivative at point t. - - - - Indefinite integral at point t. - - Point t to integrate at. - - - - Definite integral between points a and b. - - Left bound of the integration interval [a,b]. - Right bound of the integration interval [a,b]. - - - - Find the index of the greatest sample point smaller than t, - or the left index of the closest segment for extrapolation. - - - - - Left and right boundary conditions. - - - - - Natural Boundary (Zero second derivative). - - - - - Parabolically Terminated boundary. - - - - - Fixed first derivative at the boundary. - - - - - Fixed second derivative at the boundary. - - - - - A step function where the start of each segment is included, and the last segment is open-ended. - Segment i is [x_i, x_i+1) for i < N, or [x_i, infinity] for i = N. - The domain of the function is all real numbers, such that y = 0 where x <. - - Supports both differentiation and integration. - - - Sample points (N), sorted ascending - Samples values (N) of each segment starting at the corresponding sample point. - - - - Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. - - - - - Create a linear spline interpolation from an unsorted set of (x,y) value pairs. - WARNING: Works in-place and can thus causes the data array to be reordered. - - - - - Create a linear spline interpolation from an unsorted set of (x,y) value pairs. - - - - - Interpolate at point t. - - Point t to interpolate at. - Interpolated value x(t). - - - - Differentiate at point t. - - Point t to interpolate at. - Interpolated first derivative at point t. - - - - Differentiate twice at point t. - - Point t to interpolate at. - Interpolated second derivative at point t. - - - - Indefinite integral at point t. - - Point t to integrate at. - - - - Definite integral between points a and b. - - Left bound of the integration interval [a,b]. - Right bound of the integration interval [a,b]. - - - - Find the index of the greatest sample point smaller than t. - - - - - Wraps an interpolation with a transformation of the interpolated values. - - Neither differentiation nor integration is supported. - - - - Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. - - - - - Create a linear spline interpolation from an unsorted set of (x,y) value pairs. - WARNING: Works in-place and can thus causes the data array to be reordered and modified. - - - - - Create a linear spline interpolation from an unsorted set of (x,y) value pairs. - - - - - Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). - - - - - Gets a value indicating whether the algorithm supports integration (interpolated quadrature). - - - - - Interpolate at point t. - - Point t to interpolate at. - Interpolated value x(t). - - - - Differentiate at point t. NOT SUPPORTED. - - Point t to interpolate at. - Interpolated first derivative at point t. - - - - Differentiate twice at point t. NOT SUPPORTED. - - Point t to interpolate at. - Interpolated second derivative at point t. - - - - Indefinite integral at point t. NOT SUPPORTED. - - Point t to integrate at. - - - - Definite integral between points a and b. NOT SUPPORTED. - - Left bound of the integration interval [a,b]. - Right bound of the integration interval [a,b]. - - - - A Matrix class with dense storage. 
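To make the summarized interpolation API concrete, here is a minimal C# sketch. The factory names CubicSpline.InterpolateNaturalSorted and LinearSpline.InterpolateSorted are taken verbatim from the doc comments above; the wrapper class, the sample data and the evaluation member names (Interpolate, Differentiate, Integrate) follow common MathNet.Numerics usage and are illustrative only, not part of this commit.

```csharp
using System;
using MathNet.Numerics.Interpolation;

// Illustrative wrapper class; not part of the repository.
class InterpolationSketch
{
    static void Main()
    {
        // Sample points t and values x(t), already sorted ascending by t,
        // so the more efficient *Sorted factories can be used directly.
        double[] t = { 0.0, 1.0, 2.0, 3.0, 4.0 };
        double[] x = { 0.0, 0.8, 0.9, 0.1, -0.8 };

        // Natural cubic spline: zero second derivatives at both boundaries.
        CubicSpline spline = CubicSpline.InterpolateNaturalSorted(t, x);

        // Piecewise linear interpolation over the same data.
        LinearSpline line = LinearSpline.InterpolateSorted(t, x);

        Console.WriteLine(spline.Interpolate(2.5));    // interpolated value x(2.5)
        Console.WriteLine(spline.Differentiate(2.5));  // first derivative at t = 2.5
        Console.WriteLine(spline.Integrate(0.0, 4.0)); // definite integral over [0, 4]
        Console.WriteLine(line.Interpolate(2.5));
    }
}
```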
[Diff hunk continues: doc comments for the dense storage types in MathNet.Numerics.LinearAlgebra.Double. DenseMatrix (one-dimensional column-major backing array; constructors and factories from arrays, enumerables, column/row vectors, diagonals, constant or init functions and random distributions; induced L1/L-infinity and Frobenius norms; negate, add, subtract, scalar/vector/matrix and transposed multiplication, pointwise multiply/divide/power, modulus and remainder, trace, the arithmetic operators and a symmetry check) and DenseVector (dense backing array with a raw-array binding constructor; copy and factory constructors; add, subtract, negate, scalar multiply/divide, dot product, modulus and remainder, min/max and absolute min/max indices, sum, L1/L2/infinity/p norms, pointwise divide and power, and string parsing helpers), followed by the start of the DiagonalMatrix documentation.]
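A similarly hedged sketch of the dense storage types: the operations exercised here (construction from an array, the raw-array vector constructor, matrix-vector product, transpose, Frobenius and L2 norms) are the ones described in the doc comments, while the exact member names and the numbers are standard MathNet.Numerics usage written from memory, so treat them as illustrative.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

// Illustrative wrapper class; not part of the repository.
class DenseStorageSketch
{
    static void Main()
    {
        // DenseMatrix keeps its data in a single column-major array.
        var a = DenseMatrix.OfArray(new double[,]
        {
            { 4.0, 1.0 },
            { 1.0, 3.0 }
        });

        // This constructor binds directly to the raw array (no copy),
        // as described in the DenseVector doc comments.
        var v = new DenseVector(new[] { 1.0, 2.0 });

        Vector<double> av = a * v;              // matrix-vector product
        Matrix<double> ata = a.Transpose() * a; // A^T * A

        Console.WriteLine(a.FrobeniusNorm());   // entry-wise Frobenius norm
        Console.WriteLine(v.L2Norm());          // Euclidean norm
        Console.WriteLine(av);
        Console.WriteLine(ata);
    }
}
```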
[Diff hunk continues: doc comments for DiagonalMatrix (diagonal-only storage where setting non-zero off-diagonal entries throws; constructors from scalars, raw arrays, copies, enumerables, init functions and random distributions; arithmetic confined to the diagonal, determinant, diagonal get/copy, L1/L2/infinity/Frobenius norms, condition number, inverse, lower/upper and strict triangles, sub-matrix extraction, unsupported row/column permutation, symmetry check, modulus and remainder) and the beginning of the Cholesky factorization documentation (for a symmetric positive definite A, A = L*L'; determinant and log-determinant of the factored matrix; dense implementation with Factorize and Solve overloads for AX = B and Ax = b).]
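As a sketch of the Cholesky solver documented above (A = L*L', then Solve for AX = B or Ax = b): the 2×2 system is an arbitrary symmetric positive definite example, and the member names follow common MathNet.Numerics usage rather than the stripped doc comments.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

// Illustrative wrapper class; not part of the repository.
class CholeskySketch
{
    static void Main()
    {
        // A symmetric, positive definite matrix, as the Cholesky docs require;
        // otherwise the factorization constructor throws.
        var a = DenseMatrix.OfArray(new double[,]
        {
            { 4.0, 1.0 },
            { 1.0, 3.0 }
        });
        var b = new DenseVector(new[] { 1.0, 2.0 });

        // A = L*L'; computed once, reusable for several right-hand sides.
        var cholesky = a.Cholesky();
        Vector<double> x = cholesky.Solve(b);    // solves Ax = b

        Console.WriteLine(x);
        Console.WriteLine(cholesky.Determinant); // det(A) from the factor
    }
}
```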
- The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. 
- - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The type of QR factorization to perform. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - Matrix V is encoded in the property EigenVectors in the way that: - - column corresponding to real eigenvalue represents real eigenvector, - - columns corresponding to the pair of complex conjugate eigenvalues - lambda[i] and lambda[i+1] encode real and imaginary parts of eigenvectors. - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. 
- - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. 
Cholesky (user matrices): for a symmetric, positive definite matrix A, the Cholesky factorization is a lower triangular matrix L so that A = L*L'. The factorization is computed at construction time; the constructor throws if the matrix is null, not square, or not positive definite. The implementation can factor in place, caches the result, parallelizes the inner Cholesky step over the available processors, and solves AX = B (matrix right-hand side) as well as Ax = b (vector right-hand side).

Evd (user matrices): eigenvalues and eigenvectors of a real matrix. If A is symmetric, then A = V*D*V' with a diagonal eigenvalue matrix D and an orthogonal eigenvector matrix V (V*Vᵀ = I). If A is not symmetric, D is block diagonal with the real eigenvalues in 1-by-1 blocks and complex eigenvalue pairs lambda ± i*mu in 2-by-2 blocks [lambda, mu; -mu, lambda]; the columns of V represent the eigenvectors in the sense that A*V = V*D. V may be badly conditioned or even singular, so the validity of A = V*D*Inverse(V) depends on the condition of V. The decomposition is computed and cached at construction (a symmetry hint can be passed to skip the check) and the constructor throws if the algorithm fails to converge. The internal routines are derived from the Algol procedures tred2 (symmetric Householder reduction to tridiagonal form), tql2 (symmetric tridiagonal QL), orthes/ortran (nonsymmetric reduction to Hessenberg form) and hqr2 (Hessenberg to real Schur form) from the Handbook for Automatic Computation, Vol. II Linear Algebra, and the corresponding EISPACK Fortran subroutines; a helper performs complex scalar division X/Y. Solve methods are provided for AX = B and Ax = b with the factorized A.

GramSchmidt (user matrices): QR decomposition by modified Gram-Schmidt orthogonalization; the constructor throws if the matrix is null, has fewer rows than columns, or is rank deficient. Solves AX = B and Ax = b.

LU (user matrices): A = L*U, computed and cached at construction (the matrix must be square); solves AX = B and Ax = b and computes the matrix inverse via the LU decomposition.
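A minimal, hedged sketch of using the Cholesky and eigenvalue decompositions described above, assuming the standard Math.NET Numerics entry points (Cholesky(), Evd(), Solve(), EigenValues):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class CholeskyEvdDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4.0, 1.0 },
            { 1.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

        var chol = a.Cholesky();   // throws if A is not symmetric positive definite
        var x = chol.Solve(b);     // solves A x = b using A = L*L'
        Console.WriteLine(x);

        var evd = a.Evd();         // A = V*D*V' for symmetric A
        Console.WriteLine(evd.EigenValues);
    }
}
```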
QR (user matrices): QR decomposition by Householder transformation, computed and cached at construction; the factorization method (full or thin) can be selected. Internally it generates Householder columns from the initial matrix into a work array and computes Q and R in parallel over the available CPUs. Solve is provided for AX = B and Ax = b.

Svd (user matrices): the same UΣVᵀ factorization as above, with a constructor flag that controls whether the singular U and Vᵀ vectors are computed. The implementation uses a set of small helpers: |z1|*sign(z2), column swapping, scaling of a column or vector from a given row or index, a Givens rotation equivalent to the LAPACK DROTG routine (returning the parameters r, z, c and s that zero the y-coordinate of a point), 2-norms of a column or vector, dot products of two columns, and plane rotations x(i) = c*x(i) + s*y(i), y(i) = c*y(i) - s*x(i). Solve is provided for AX = B and Ax = b with the factorized A.

Matrix (double): the double-precision specialization of the Matrix class. It can coerce all values whose absolute value is smaller than a threshold to zero, and implements the conjugate transpose, conjugation and negation, addition and subtraction of scalars and matrices (with null and dimension checks), multiplication and division by scalars, matrix-vector and matrix-matrix products, products with the (conjugate) transpose of either operand, canonical modulus and remainder by a scalar (in both operand orders), pointwise multiply, divide, power, modulus, remainder, exponential and natural logarithm, the Moore-Penrose pseudo-inverse, the trace (square matrices only), the induced L1 norm (maximum absolute column sum), the induced infinity norm (maximum absolute row sum) and the entry-wise Frobenius norm, all writing into caller-provided result matrices or vectors where applicable.
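A short sketch of the element-wise operations and norms listed above, assuming the usual Math.NET Numerics member names (PointwiseMultiply, TransposeThisAndMultiply, Trace, L1Norm, InfinityNorm, FrobeniusNorm, RowSums, ColumnSums):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class MatrixOpsDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.Dense(3, 3, (i, j) => i + 2.0 * j);
        var b = Matrix<double>.Build.Dense(3, 3, (i, j) => 1.5);

        var hadamard = a.PointwiseMultiply(b);        // element-wise product
        var atb      = a.TransposeThisAndMultiply(b); // A^T * B

        Console.WriteLine(a.Trace());            // sum of the diagonal entries
        Console.WriteLine(a.L1Norm());           // maximum absolute column sum
        Console.WriteLine(a.InfinityNorm());     // maximum absolute row sum
        Console.WriteLine(a.FrobeniusNorm());    // entry-wise Frobenius norm
        Console.WriteLine(hadamard.RowSums());   // value sum of each row
        Console.WriteLine(atb.ColumnSums());     // value sum of each column
    }
}
```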
The Matrix class also computes the p-norms of all row or column vectors and normalizes rows or columns to a unit p-norm (typical values for p are 1.0 for the L1/Manhattan norm, 2.0 for the L2/Euclidean norm and positive infinity for the infinity norm), the value sum and absolute value sum of each row and column, and whether the matrix is Hermitian (conjugate symmetric).

BiCgStab: a Bi-Conjugate Gradient stabilized iterative matrix solver. The BiCGStab solver is an 'improvement' of the standard Conjugate Gradient (CG) solver and, unlike CG, can be used on non-symmetric matrices. Note that much of the success of the solver depends on the selection of a proper preconditioner. The Bi-CGSTAB algorithm was taken from "Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods" by Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Charles Romine and Henk van der Vorst (http://www.netlib.org/templates/Templates.html); the algorithm is described in Chapter 2, section 2.3.8, page 27. The example code below provides an indication of the possible use of the solver.
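The example from the original documentation was not preserved; the following is a reconstructed sketch only, using the Solve(matrix, input, result, iterator, preconditioner) signature documented for the solver. The class names (BiCgStab, Iterator, ResidualStopCriterion, IterationCountStopCriterion, DiagonalPreconditioner) are taken from the Math.NET Numerics 3.x/4.x API and may differ in other versions.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class BiCgStabDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 5.0, 1.0, 0.0 },
            { 1.0, 4.0, 1.0 },
            { 0.0, 1.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 6.0, 6.0, 4.0 });
        var x = Vector<double>.Build.Dense(3);          // result vector, filled in place

        var iterator = new Iterator<double>(
            new ResidualStopCriterion<double>(1e-10),   // stop when the residual is small enough
            new IterationCountStopCriterion<double>(1000));

        var solver = new BiCgStab();
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine(x);
        Console.WriteLine((b - a * x).L2Norm());        // true residual b - Ax
    }
}
```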
BiCgStab calculates the true residual of Ax = b as residual = b - Ax and solves the matrix equation Ax = b given the coefficient matrix A, the right-hand side vector b, the result vector x, an iterator that controls when to stop iterating, and a preconditioner used for the approximations.

CompositeSolver: a composite matrix solver in which the actual solver is made up of a sequence of matrix solvers. It is based on "Faster PDE-based simulations using robust composite linear solvers" by S. Bhowmick, P. Raghavan, L. McInnes and B. Norris, Future Generation Computer Systems, Vol. 20, 2004, pp. 373–387. Note that if an iterator is passed to this solver it will be used for all the sub-solvers.
CompositeSolver holds the collection of solvers that will be used and exposes the same Solve(matrix, input, result, iterator, preconditioner) entry point.

DiagonalPreconditioner: a preconditioner that uses the inverse of the matrix diagonal as preconditioning values. It stores the inverted diagonal, can return the decomposed matrix diagonal, is initialized from a square matrix (throwing if the matrix is null or not square), and approximates the solution of Ax = b given the right-hand side vector and a result vector.

GpBiCg: a Generalized Product Bi-Conjugate Gradient iterative matrix solver, an alternative to the Bi-Conjugate Gradient stabilized solver that can likewise be used on non-symmetric matrices; again, much of its success depends on the selection of a proper preconditioner. The GPBiCG algorithm was taken from "GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness" by S. Fujino, Applied Numerical Mathematics, Volume 41, 2002, pp. 107-117. The example code from the original documentation is not reproduced here.
GpBiCg exposes the number of BiCGStab steps to take before switching to the GPBiCG algorithm and the number of GPBiCG steps to take before switching back (both settable), computes the true residual b - Ax, decides per iteration number whether to perform BiCGStab steps, and solves Ax = b with the usual (matrix, input, result, iterator, preconditioner) signature.

ILU(0) preconditioner: an incomplete, level 0, LU factorization preconditioner. The ILU(0) algorithm was taken from "Iterative Methods for Sparse Linear Systems" by Yousef Saad; the algorithm is described in Chapter 10, section 10.3.2, page 275.
The ILU(0) preconditioner stores the lower (L) and upper (U) factors combined in a single matrix to reduce storage, can return the upper and lower triangular matrices created during the decomposition, is initialized from a square matrix (with null and shape checks) and approximates the solution of Ax = b.

ILUTP-Mem preconditioner: performs an incomplete LU factorization with drop tolerance and partial pivoting; the drop tolerance indicates which additional entries are dropped from the factorized L and U matrices. The ILUTP-Mem algorithm was taken from "ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner" by Tzu-Yi Chen (Department of Mathematics and Computer Science, Pomona College, Claremont CA 91711, USA), published in Lecture Notes in Computer Science, Volume 3046/2004, pp. 20-28; the algorithm is described in Section 2, page 22.

It keeps the decomposed upper and lower triangular matrices, a pivot array, and three tuning parameters with default values: FillLevel (the amount of fill allowed, as a fraction of the number of non-zero entries of the original matrix; standard value 200; values below 1.0 give a preconditioner with fewer non-zeros than the original matrix, values above 1.0 allow more; must not be negative), DropTolerance (the absolute value below which entries are dropped; standard value 0.0001; 0.0 drops nothing; must not be negative) and PivotTolerance (pivoting takes place when row(i,j) > row(i,i)/PivotTolerance for some j not equal to i; the standard value 0.0 disables pivoting; must not be negative). Changing any of these after the preconditioner has been created invalidates it and requires re-initialization. The upper and lower triangular factors can be retrieved for debugging purposes only.
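A hedged sketch of configuring this preconditioner; the class name ILUTPPreconditioner and its (fill level, drop tolerance, pivot tolerance) constructor are assumptions based on the description above and should be verified against the Math.NET Numerics release in use.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class IlutpDemo
{
    static void Solve(Matrix<double> a, Vector<double> b, Vector<double> x)
    {
        // documented standard values: fill level 200, drop tolerance 0.0001, no pivoting
        var preconditioner = new ILUTPPreconditioner(200.0, 0.0001, 0.0);

        var iterator = new Iterator<double>(
            new ResidualStopCriterion<double>(1e-8),
            new IterationCountStopCriterion<double>(500));

        new BiCgStab().Solve(a, b, x, iterator, preconditioner);
    }
}
```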
The ILUTP preconditioner's pivot array can also be retrieved for debugging (normal use does not need it, because the preconditioner returns the solution vector values in the proper order). Initialization accepts a general matrix type but stores the data internally as a sparse matrix, so passing a dense matrix is not recommended. Internal helpers pivot a row according to the internal pivot array, check whether a pivot has already been performed, swap two columns, sort a vector into descending order by writing sorted indices into an index array (leaving the vector unchanged), approximate the solution of Ax = b, and re-pivot the result vector.

An element sorter for this preconditioner sorts the columns of a sparse matrix based on the value of the element on the matrix diagonal, using heap sort on doubles (via an index array) and on integers, with the usual build-heap, sift and exchange helpers.

MILU(0) preconditioner: a simple milu(0) preconditioner based on original Fortran code by Yousef Saad (07 January 2004). A flag selects the modified or the standard ILU(0) variant; the preconditioner reports whether it is initialized, requires a square matrix backed by SparseCompressedRowMatrixStorage, and approximates the solution of Ax = b. The core MILU0 routine takes the matrix order and CSR input (values, column indices, row pointers), produces MSR output (values, combined row pointers/column indices, and a pointer to the diagonal elements), optionally uses the modified/MILU algorithm (recommended), and returns 0 on success or k > 0 if a zero pivot was encountered at step k.

MlkBiCgStab: a Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver, an 'improvement' of the standard BiCgStab solver. The algorithm was taken from "ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors" by Man-Chung Yeung and Tony F. Chan, SIAM Journal of Scientific Computing, Volume 21, Number 4, pp. 1263-1290.
MlkBiCgStab maintains a default number of starting vectors and a collection of orthonormal starting vectors that form the basis of the Krylov sub-space. The number of starting vectors must be larger than 1 and smaller than the number of variables in the matrix, can be reset to its default, and the solver can compute how many starting vectors to create, generate the starting-vector array (never larger than the requested maximum, possibly smaller when the number of variables is smaller) and create arrays of random vectors. As with the other solvers, it computes the true residual b - Ax and solves Ax = b with the (matrix, input, result, iterator, preconditioner) signature.

TFQMR: a Transpose Free Quasi-Minimal Residual iterative matrix solver. The TFQMR algorithm was taken from "Iterative Methods for Sparse Linear Systems" by Yousef Saad; the algorithm is described in Chapter 7, section 7.4.3, page 219.
TFQMR computes the true residual b - Ax, uses a small is-even helper for the iteration index, and solves Ax = b with the (matrix, input, result, iterator, preconditioner) signature.

SparseMatrix: a matrix with sparse storage, intended for very large matrices where most of the cells are zero. The underlying storage scheme is the 3-array compressed-sparse-row (CSR) format (see Wikipedia - CSR). It exposes the number of non-zero elements and a large family of creation methods: directly from an initialized storage instance (used without copying, intended for advanced performance or interop scenarios), as a new square or rectangular all-zero matrix (zero-length matrices are not supported), and as an independent copy of another matrix, a two-dimensional array, an indexed enumerable (each key provided at most once, omitted keys are zero), a row-major enumerable, a column-major array, enumerables of column or row enumerables, column or row arrays, column or row vectors, a diagonal given as a vector or array, a uniform value, an init function, diagonal value/init variants, and the square identity matrix.

It can return the lower and upper triangle as new matrices or write them (also strictly, i.e. without the diagonal elements) into a caller-provided result matrix of matching dimensions, negate into a result matrix, compute the induced infinity norm and the entry-wise Frobenius norm, add, subtract and multiply matrices, scalars and vectors (including products with the transpose), perform pointwise multiplication and division, compute the canonical modulus and the remainder by a scalar, and test whether it is symmetric.
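A small sketch of the sparse matrix in use, based on the constructors and members described above (new SparseMatrix(rows, columns), NonZerosCount, LU().Solve()); only a handful of entries are set, everything else stays an implicit zero:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class SparseMatrixDemo
{
    static void Main()
    {
        const int n = 1000;
        var a = new SparseMatrix(n, n);              // all zero, CSR storage
        for (int i = 0; i < n; i++)
        {
            a[i, i] = 4.0;                           // tridiagonal test matrix
            if (i + 1 < n) { a[i, i + 1] = -1.0; a[i + 1, i] = -1.0; }
        }

        var b = Vector<double>.Build.Dense(n, 1.0);
        var x = a.LU().Solve(b);                     // direct solve; the iterative solvers work too

        Console.WriteLine(a.NonZerosCount);          // stored non-zero elements
        Console.WriteLine((b - a * x).L2Norm());     // residual of the solve
    }
}
```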
SparseMatrix also provides operators for addition, subtraction, negation, scalar multiplication (on either side), matrix-matrix, matrix-vector and vector-matrix products, and a conversion returning a matrix with the same values; for binary operators the result representation follows the denser of the two operands, and null arguments or mismatched dimensions throw.

SparseVector: a vector with sparse storage, intended for very large vectors where most of the cells are zero; the sparse vector is not thread safe. It exposes the number of non-zero elements and can be created directly from an initialized storage instance (no copy), with a given length (all zeros; zero-length vectors are not supported), as an independent copy of another vector, an enumerable, or an indexed enumerable (each key provided at most once), or filled with a uniform value or an init function. It implements addition and subtraction of scalars and vectors (note that adding a non-zero scalar produces a 100% filled sparse vector and is very inefficient; a dense vector is the better choice there), negation, scalar multiplication, the dot product, canonical modulus and remainder by a scalar, the corresponding operators (+, unary and binary -, *, /, %, including the dot product of two vectors), the indices of the minimum and maximum elements (also of the absolute values), the element sum, the L1, infinity and general p-norms, and pointwise multiplication. Parse and TryParse convert strings of the form 'n', 'n,n,..', '(n,n,..)' or '[n,n,...]' (optionally with an IFormatProvider supplying culture-specific formatting) into a double-precision sparse vector.
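A corresponding sketch for the sparse vector, using the constructor, NonZerosCount, DotProduct and L1Norm members described above:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class SparseVectorDemo
{
    static void Main()
    {
        var v = new SparseVector(100000);   // all zeros, nothing stored yet
        v[3] = 2.5;
        v[40000] = -1.0;

        var w = Vector<double>.Build.Dense(100000, 0.001);

        Console.WriteLine(v.NonZerosCount); // 2 stored entries
        Console.WriteLine(v.DotProduct(w)); // sum of v[i]*w[i]
        Console.WriteLine(v.L1Norm());      // sum of absolute values
    }
}
```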
Vector (double): the double-precision specialization of the Vector class. It can coerce all values whose absolute value is smaller than a threshold to zero and implements conjugation, negation, addition and subtraction of scalars and vectors, scalar multiplication and division (in both operand orders), pointwise multiply, divide, power (with a scalar or a vector exponent), modulus, remainder, exponential and natural logarithm, the dot product and the conjugate dot product, canonical modulus and remainder by a scalar (in both operand orders), the values and indices of the minimum and maximum elements (also of the absolute values), the element sum, the L1 (Manhattan), L2 (Euclidean), infinity and general p-norms, and normalization to a unit vector with respect to the p-norm, all writing into caller-provided result vectors where applicable.

DenseMatrix: a matrix class with dense storage; the underlying storage is a one-dimensional array in column-major order (column by column). The row and column counts are cached in fields rather than read through the RowCount/ColumnCount properties to speed up index calculations, and the raw data array is exposed. Creation methods mirror the sparse variants: directly from an initialized storage instance (no copy, for advanced scenarios), as a square or rectangular all-zero matrix (zero-length matrices are not supported), directly bound to a raw column-major array (used without copying, so changes to the array and the matrix affect each other), and as an independent copy of another matrix, a two-dimensional array, an indexed enumerable (each key at most once, omitted keys are zero), or a column-major enumerable.
- A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. 
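A short, hedged sketch of a few of the dense-matrix factory methods listed above; the `OfArray`, `OfColumnArrays` and `OfDiagonalArray` names are taken from the library, and the exact signatures are assumed.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Single;

class DenseMatrixFactories
{
    static void Main()
    {
        // Copy of a two-dimensional array (row, column indexing).
        var fromArray = DenseMatrix.OfArray(new float[,]
        {
            { 1f, 2f },
            { 3f, 4f },
        });

        // Copy of column arrays: each array becomes one column.
        var fromColumns = DenseMatrix.OfColumnArrays(
            new float[] { 1f, 3f },
            new float[] { 2f, 4f });

        // Square matrix with the given values on the diagonal, zero elsewhere.
        var diagonal = DenseMatrix.OfDiagonalArray(new float[] { 1f, 2f, 3f });

        Console.WriteLine(fromArray.Equals(fromColumns)); // True: same matrix, built two ways
    }
}
```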
- - - - - Create a new dense matrix and initialize each value to the same provided value. - - - - - Create a new dense matrix and initialize each value using the provided init function. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new dense matrix with values sampled from the provided random distribution. - - - - - Gets the matrix's data. - - The matrix's data. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of add - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. 
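The arithmetic entries above map onto operators and public methods of the dense types; a minimal sketch, assuming the usual Math.NET operator overloads, could look like this:

```csharp
using MathNet.Numerics.LinearAlgebra.Single;

class MatrixArithmeticExample
{
    static void Main()
    {
        var a = DenseMatrix.OfArray(new float[,] { { 1f, 2f }, { 3f, 4f } });
        var b = DenseMatrix.OfArray(new float[,] { { 5f, 6f }, { 7f, 8f } });
        var v = DenseVector.OfArray(new float[] { 1f, 1f });

        var sum      = a + b;                   // element-wise addition
        var scaled   = 2f * a;                  // scalar multiplication
        var matVec   = a * v;                   // matrix-vector product
        var matMat   = a * b;                   // matrix-matrix product
        var hadamard = a.PointwiseMultiply(b);  // element-wise (Hadamard) product

        System.Console.WriteLine(matMat);
    }
}
```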
- - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Evaluates whether this matrix is symmetric. - - - - - A vector using dense storage. - - - - - Number of elements - - - - - Gets the vector's data. - - - - - Create a new dense vector straight from an initialized vector storage instance. 
- The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new dense vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new dense vector directly binding to a raw array. - The array is used directly without copying. - Very efficient, but changes to the array and the vector will affect each other. - - - - - Create a new dense vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given array. - This new vector will be independent from the array. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new dense vector and initialize each value using the provided value. - - - - - Create a new dense vector and initialize each value using the provided init function. - - - - - Create a new dense vector with values sampled from the provided random distribution. - - - - - Gets the vector's data. - - The vector's data. - - - - Returns a reference to the internal data structure. - - The DenseVector whose internal data we are - returning. - - A reference to the internal date of the given vector. - - - - - Returns a vector bound directly to a reference of the provided array. - - The array to bind to the DenseVector object. - - A DenseVector whose values are bound to the given array. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - The scalar to add. - The vector to store the result of the addition. - - - - Adds another vector to this vector and stores the result into the result vector. - - The vector to add to this one. - The vector to store the result of the addition. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The vector to store the result of the subtraction. - - - - Subtracts another vector from this vector and stores the result into the result vector. - - The vector to subtract from this one. - The vector to store the result of the subtraction. - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Negates vector and saves result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - The scalar to multiply. 
- The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . - - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the modulus of. - The divisor to use, - The result of the calculation - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise multiply this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply this one by. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Creates a float dense vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. - - - A float dense vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a real dense vector to float-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. 
- - - A string containing a real vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a real dense vector to float-precision dense vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - A matrix type for diagonal matrices. - - - Diagonal matrices can be non-square matrices but the diagonal always starts - at element 0,0. A diagonal matrix will throw an exception if non diagonal - entries are set. The exception to this is when the off diagonal elements are - 0.0 or NaN; these settings will cause no change to the diagonal matrix. - - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new diagonal matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns. - All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to contain the diagonal elements only and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - Create a new diagonal matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - The matrix to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - The array to copy from must be diagonal as well. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new diagonal matrix and initialize each diagonal value using the provided init function. 
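The Parse/TryParse entries above accept the listed plain-text vector formats; a small sketch of their use follows (the exact overloads, including the IFormatProvider parameter, are assumed).

```csharp
using System.Globalization;
using MathNet.Numerics.LinearAlgebra.Single;

class VectorParseExample
{
    static void Main()
    {
        // Accepted formats include "n", "n,n,..", "(n,n,..)" and "[n,n,...]".
        var v1 = DenseVector.Parse("1, 2.5, 3", CultureInfo.InvariantCulture);

        // TryParse reports failure instead of throwing; the result is null on failure.
        DenseVector v2;
        bool ok = DenseVector.TryParse("[4, 5, 6]", out v2);

        System.Console.WriteLine(ok ? v2.ToString() : "could not parse");
    }
}
```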
- - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new diagonal matrix with diagonal values sampled from the provided random distribution. - - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - If the result matrix's dimensions are not the same as this matrix. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to add. - The matrix to store the result of the division. - - - - Computes the determinant of this matrix. - - The determinant of this matrix. - - - - Returns the elements of the diagonal in a . - - The elements of the diagonal. - For non-square matrices, the method returns Min(Rows, Columns) elements where - i == j (i is the row index, and j is the column index). - - - - Copies the values of the given array to the diagonal. - - The array to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - - Copies the values of the given to the diagonal. - - The vector to copy the values from. The length of the vector should be - Min(Rows, Columns). - If the length of does not - equal Min(Rows, Columns). - For non-square matrices, the elements of are copied to - this[i,i]. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced L2 norm of the matrix. - The largest singular value of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - Calculates the condition number of this matrix. 
- The condition number of the matrix. - - - Computes the inverse of this matrix. - If is not a square matrix. - If is singular. - The inverse of this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If the result matrix's dimensions are not the same as this matrix. - - - - Creates a matrix that contains the values from the requested sub-matrix. - - The row to start copying from. - The number of rows to copy. Must be positive. - The column to start copying from. - The number of columns to copy. Must be positive. - The requested sub-matrix. - If: is - negative, or greater than or equal to the number of rows. - is negative, or greater than or equal to the number - of columns. - (columnIndex + columnLength) >= Columns - (rowIndex + rowLength) >= Rows - If or - is not positive. - - - - Permute the columns of a matrix according to a permutation. - - The column permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Permute the rows of a matrix according to a permutation. - - The row permutation to apply to this matrix. - Always thrown - Permutation in diagonal matrix are senseless, because of matrix nature - - - - Evaluates whether this matrix is symmetric. - - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - A class which encapsulates the functionality of a Cholesky factorization. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. 
- - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Gets the determinant of the matrix for which the Cholesky matrix was computed. - - - - - Gets the log determinant of the matrix for which the Cholesky matrix was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for dense matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. 
On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. 
- The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. 
- Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. - - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. 
- If EVD algorithm failed to converge with matrix . - - - - Symmetric Householder reduction to tridiagonal form. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tred2 by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Complex scalar division X/Y. - - Real part of X - Imaginary part of X - Real part of Y - Imaginary part of Y - Division result as a number. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an orthogonal matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. 
- If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Double value z1 - Double value z2 - Result multiplication of signum function and absolute value - - - - Swap column and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. 
On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Calculate Norm 2 of the column in matrix starting from row - - Source matrix - The number of rows in - Column index - Start row index - Norm2 (Euclidean norm) of the column - - - - Calculate Norm 2 of the vector starting from index - - Source vector - Start index - Norm2 (Euclidean norm) of the vector - - - - Calculate dot product of and - - Source matrix - The number of rows in - Index of column A - Index of column B - Starting row index - Dot product value - - - - Performs rotation of points in the plane. Given two vectors x and y , - each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) - - Source matrix - The number of rows in - Index of column A - Index of column B - Scalar "c" value - Scalar "s" value - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - float version of the class. - - - - - Initializes a new instance of the Matrix class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Returns the conjugate transpose of this matrix. - - The conjugate transpose of this matrix. - - - - Puts the conjugate transpose of this matrix into the result matrix. - - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. 
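The factorization classes described above (Cholesky, LU, QR, SVD, EVD) each compute their decomposition once and expose a Solve method; the following hedged sketch solves the same small system through several of them, assuming the usual Cholesky()/LU()/QR()/Svd() entry points on the matrix types.

```csharp
using MathNet.Numerics.LinearAlgebra.Single;

class FactorizationSolveExample
{
    static void Main()
    {
        // Small symmetric positive definite system A*x = b.
        var a = DenseMatrix.OfArray(new float[,]
        {
            { 4f, 1f },
            { 1f, 3f },
        });
        var b = DenseVector.OfArray(new float[] { 1f, 2f });

        // Every factorization is computed once and then reused via Solve().
        var viaCholesky = a.Cholesky().Solve(b); // needs symmetric positive definite A
        var viaLu       = a.LU().Solve(b);
        var viaQr       = a.QR().Solve(b);
        var viaSvd      = a.Svd().Solve(b);      // also usable for least-squares problems

        System.Console.WriteLine(viaLu);
    }
}
```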
- - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to divide by each element of the matrix. - The matrix to store the result of the division. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. 
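A brief sketch of the transpose-multiply and pointwise entries above (method names as documented here; exact parameter types assumed):

```csharp
using MathNet.Numerics.LinearAlgebra.Single;

class TransposeMultiplyExample
{
    static void Main()
    {
        var a = DenseMatrix.OfArray(new float[,] { { 1f, 2f }, { 3f, 4f } });
        var b = DenseMatrix.OfArray(new float[,] { { 5f, 6f }, { 7f, 8f } });

        var atb = a.TransposeThisAndMultiply(b); // A' * B, without forming A' explicitly
        var abt = a.TransposeAndMultiply(b);     // A  * B'
        var sq  = a.PointwisePower(2f);          // every entry raised to the given exponent

        System.Console.WriteLine(atb);
    }
}
```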
- - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Calculates the p-norms of all row vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the p-norms of all column vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all row vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all column vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the value sum of each row vector. - - - - - Calculates the absolute value sum of each row vector. - - - - - Calculates the value sum of each column vector. - - - - - Calculates the absolute value sum of each column vector. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' - of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the - BiCGStab can be used on non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The Bi-CGSTAB algorithm was taken from:
- Templates for the solution of linear systems: Building blocks - for iterative methods -
- Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, - June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, - Charles Romine and Henk van der Vorst -
- Url: http://www.netlib.org/templates/Templates.html -
- Algorithm is described in Chapter 2, section 2.3.8, page 27 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
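The paragraph above points at example code for the solver. A minimal sketch of the usual calling pattern is given here; it assumes a Math.NET-Numerics-style API in which the types documented in this file (`SparseMatrix`, `DenseVector`, `BiCgStab`, `Iterator`, `DiagonalPreconditioner`, the stop criteria) exist under these spellings, so the namespaces and exact signatures should be read as assumptions rather than verified facts.

```csharp
// Sketch only: drive a BiCGStab-style solver against a small non-symmetric system.
// Namespaces and type names are assumed (Math.NET-Numerics-style), not verified.
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;          // SparseMatrix, DenseVector
using MathNet.Numerics.LinearAlgebra.Double.Solvers;  // BiCgStab, DiagonalPreconditioner
using MathNet.Numerics.LinearAlgebra.Solvers;         // Iterator, stop criteria

class BiCgStabExample
{
    static void Main()
    {
        // A small non-symmetric coefficient matrix A and right-hand side b.
        var A = SparseMatrix.OfArray(new double[,]
        {
            { 4, 1, 0 },
            { 2, 5, 1 },
            { 0, 1, 3 },
        });
        var b = DenseVector.OfArray(new double[] { 1, 2, 3 });

        // Stop after 1000 iterations or once the residual falls below 1e-10.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        // Simple diagonal (Jacobi) preconditioner, initialised from A.
        var preconditioner = new DiagonalPreconditioner();
        preconditioner.Initialize(A);

        // Solve(A, b, x, iterator, preconditioner) fills the result vector x in place.
        var x = Vector<double>.Build.Dense(b.Count);
        new BiCgStab().Solve(A, b, x, iterator, preconditioner);
    }
}
```

The same `Solve(A, b, x, iterator, preconditioner)` shape is documented for every solver in this file, so the later sketches only show what changes per solver.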
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient , A. - The solution , b. - The result , x. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A composite matrix solver. The actual solver is made by a sequence of - matrix solvers. - - - - Solver based on:
- Faster PDE-based simulations using robust composite linear solvers
- S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
- Future Generation Computer Systems, Vol 20, 2004, pp 373–387
-
- - Note that if an iterator is passed to this solver it will be used for all the sub-solvers. - -
-
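The behaviour described above (a sequence of sub-solvers that all share the one iterator passed in) can be illustrated as a plain loop. This is not the library's own implementation; the interface names follow the solver signature documented in this file, and `IterationStatus` is an assumed enum name.

```csharp
// Illustrative reimplementation of the composite idea: run each sub-solver in turn
// against the same iterator and keep the first result that converges.
// Assumes the same usings as the BiCgStab sketch above, plus System.Collections.Generic.
static Vector<double> SolveComposite(
    Matrix<double> A, Vector<double> b,
    IEnumerable<IIterativeSolver<double>> subSolvers,
    Iterator<double> iterator,
    IPreconditioner<double> preconditioner)
{
    var x = Vector<double>.Build.Dense(b.Count);
    foreach (var solver in subSolvers)
    {
        // Same Solve(A, b, x, iterator, preconditioner) contract as documented above.
        solver.Solve(A, b, x, iterator, preconditioner);
        if (iterator.Status == IterationStatus.Converged)
            break;   // a sub-solver succeeded; stop trying the rest
    }
    return x;
}
```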
- - - The collection of solvers that will be used - - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A diagonal preconditioner. The preconditioner uses the inverse - of the matrix diagonal as preconditioning values. - - - - - The inverse of the matrix diagonal. - - - - - Returns the decomposed matrix diagonal. - - The matrix diagonal. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - A Generalized Product Bi-Conjugate Gradient iterative matrix solver. - - - - The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an - alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. - Unlike the CG solver the GPBiCG solver can be used on - non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the - proper preconditioner. -
- - The GPBiCG algorithm was taken from:
- GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with - efficiency and robustness -
- S. Fujino -
- Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 -
-
- - The example code below provides an indication of the possible use of the - solver. - -
-
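The two switching counters documented just below control how many BiCGStab-style and GPBiCG-style steps are taken before the algorithm alternates. A short sketch; the class and property names (`GpBiCg`, `NumberOfBiCgStabSteps`, `NumberOfGpBiCgSteps`) are assumed spellings for the members described here.

```csharp
// Assumed names; reuses A, b, iterator and preconditioner from the BiCgStab sketch above.
var solver = new GpBiCg
{
    NumberOfBiCgStabSteps = 2,   // BiCGStab-style steps before switching
    NumberOfGpBiCgSteps = 4      // GPBiCG-style steps before switching back
};
var x = Vector<double>.Build.Dense(b.Count);
solver.Solve(A, b, x, iterator, preconditioner);   // same contract as the other solvers
```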
- - - Indicates the number of BiCGStab steps should be taken - before switching. - - - - - Indicates the number of GPBiCG steps should be taken - before switching. - - - - - Gets or sets the number of steps taken with the BiCgStab algorithm - before switching over to the GPBiCG algorithm. - - - - - Gets or sets the number of steps taken with the GPBiCG algorithm - before switching over to the BiCgStab algorithm. - - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Decide if to do steps with BiCgStab - - Number of iteration - true if yes, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - An incomplete, level 0, LU factorization preconditioner. - - - The ILU(0) algorithm was taken from:
- Iterative methods for sparse linear systems
- Yousef Saad
- Algorithm is described in Chapter 10, section 10.3.2, page 275
-
-
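The preconditioners in this file share the two-step contract documented just below: `Initialize` factors the coefficient matrix into internal storage, and `Approximate` then solves the preconditioning system for a given right-hand side. A sketch, with `ILU0Preconditioner` as an assumed class name:

```csharp
// Assumed class name; A, b and iterator are the same as in the earlier sketches.
var ilu0 = new ILU0Preconditioner();
ilu0.Initialize(A);                          // build the combined L/U storage from A

var z = Vector<double>.Build.Dense(b.Count);
ilu0.Approximate(b, z);                      // z approximates the solution of A*z = b

// In practice it is simply handed to an iterative solver:
new BiCgStab().Solve(A, b, Vector<double>.Build.Dense(b.Count), iterator, ilu0);
```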
- - - The matrix holding the lower (L) and upper (U) matrices. The - decomposition matrices are combined to reduce storage. - - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - A new matrix containing the lower triagonal elements. - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - This class performs an Incomplete LU factorization with drop tolerance - and partial pivoting. The drop tolerance indicates which additional entries - will be dropped from the factorized LU matrices. - - - The ILUTP-Mem algorithm was taken from:
- ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner -
- Tzu-Yi Chen, Department of Mathematics and Computer Science,
- Pomona College, Claremont CA 91711, USA
- Published in:
- Lecture Notes in Computer Science
- Volume 3046 / 2004
- pp. 20 - 28
- Algorithm is described in Section 2, page 22 -
-
- - - The default fill level. - - - - - The default drop tolerance. - - - - - The decomposed upper triangular matrix. - - - - - The decomposed lower triangular matrix. - - - - - The array containing the pivot values. - - - - - The fill level. - - - - - The drop tolerance. - - - - - The pivot tolerance. - - - - - Initializes a new instance of the class with the default settings. - - - - - Initializes a new instance of the class with the specified settings. - - - The amount of fill that is allowed in the matrix. The value is a fraction of - the number of non-zero entries in the original matrix. Values should be positive. - - - The absolute drop tolerance which indicates below what absolute value an entry - will be dropped from the matrix. A drop tolerance of 0.0 means that no values - will be dropped. Values should always be positive. - - - The pivot tolerance which indicates at what level pivoting will take place. A - value of 0.0 means that no pivoting will take place. - - - - - Gets or sets the amount of fill that is allowed in the matrix. The - value is a fraction of the number of non-zero entries in the original - matrix. The standard value is 200. - - - - Values should always be positive and can be higher than 1.0. A value lower - than 1.0 means that the eventual preconditioner matrix will have fewer - non-zero entries as the original matrix. A value higher than 1.0 means that - the eventual preconditioner can have more non-zero values than the original - matrix. - - - Note that any changes to the FillLevel after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the absolute drop tolerance which indicates below what absolute value - an entry will be dropped from the matrix. The standard value is 0.0001. - - - - The values should always be positive and can be larger than 1.0. A low value will - keep more small numbers in the preconditioner matrix. A high value will remove - more small numbers from the preconditioner matrix. - - - Note that any changes to the DropTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the pivot tolerance which indicates at what level pivoting will - take place. The standard value is 0.0 which means pivoting will never take place. - - - - The pivot tolerance is used to calculate if pivoting is necessary. Pivoting - will take place if any of the values in a row is bigger than the - diagonal value of that row divided by the pivot tolerance, i.e. pivoting - will take place if row(i,j) > row(i,i) / PivotTolerance for - any j that is not equal to i. - - - Note that any changes to the PivotTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the lower triagonal elements. 
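The three tuning knobs documented above (fill level, drop tolerance, pivot tolerance) are set once before the preconditioner is initialised, because changing them afterwards invalidates the factorization and requires re-initialization. The class and property names below are assumed spellings; only the semantics are taken from the descriptions above.

```csharp
// Assumed names; illustrative values only.
var ilutp = new ILUTPPreconditioner
{
    FillLevel = 10.0,       // allow up to ~10x the non-zero count of the original matrix
    DropTolerance = 1e-4,   // drop factor entries with absolute value below 1e-4
    PivotTolerance = 0.1    // pivot when row(i,j) > row(i,i) / 0.1
};
ilutp.Initialize(A);        // must be re-run if any of the three values change later
```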
- - - - Returns the pivot array. This array is not needed for normal use because - the preconditioner will return the solution vector values in the proper order. - - - This method is used for debugging purposes only and should normally not be used. - - The pivot array. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. Note that the - method takes a general matrix type. However internally the data is stored - as a sparse matrix. Therefore it is not recommended to pass a dense matrix. - - If is . - If is not a square matrix. - - - - Pivot elements in the according to internal pivot array - - Row to pivot in - - - - Was pivoting already performed - - Pivots already done - Current item to pivot - true if performed, otherwise false - - - - Swap columns in the - - Source . - First column index to swap - Second column index to swap - - - - Sort vector descending, not changing vector but placing sorted indices to - - Start sort form - Sort till upper bound - Array with sorted vector indices - Source - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - Pivot elements in according to internal pivot array - - Source . - Result after pivoting. - - - - An element sort algorithm for the class. - - - This sort algorithm is used to sort the columns in a sparse matrix based on - the value of the element on the diagonal of the matrix. - - - - - Sorts the elements of the vector in decreasing - fashion. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Sorts the elements of the vector in decreasing - fashion using heap sort algorithm. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Build heap for double indices - - Root position - Length of - Indices of - Target - - - - Sift double indices - - Indices of - Target - Root position - Length of - - - - Sorts the given integers in a decreasing fashion. - - The values. - - - - Sort the given integers in a decreasing fashion using heapsort algorithm - - Array of values to sort - Length of - - - - Build heap - - Target values array - Root position - Length of - - - - Sift values - - Target value array - Root position - Length of - - - - Exchange values in array - - Target values array - First value to exchange - Second value to exchange - - - - A simple milu(0) preconditioner. - - - Original Fortran code by Yousef Saad (07 January 2004) - - - - Use modified or standard ILU(0) - - - - Gets or sets a value indicating whether to use modified or standard ILU(0). - - - - - Gets a value indicating whether the preconditioner is initialized. - - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square or is not an - instance of SparseCompressedRowMatrixStorage. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector b. - The left hand side vector x. - - - - MILU0 is a simple milu(0) preconditioner. - - Order of the matrix. - Matrix values in CSR format (input). - Column indices (input). 
- Row pointers (input). - Matrix values in MSR format (output). - Row pointers and column indices (output). - Pointer to diagonal elements (output). - True if the modified/MILU algorithm should be used (recommended) - Returns 0 on success or k > 0 if a zero pivot was encountered at step k. - - - - A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' - of the standard BiCgStab solver. - - - The algorithm was taken from:
- ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors -
- Man-Chung Yeung and Tony F. Chan -
- SIAM Journal of Scientific Computing -
- Volume 21, Number 4, pp. 1263 - 1290 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
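ML(k)-BiCGStab builds its Krylov subspace from k starting vectors, and the member documented below requires k to be larger than 1 and smaller than the number of unknowns. A one-line sketch with assumed names:

```csharp
// Assumed names; reuses A, b, iterator and preconditioner from the BiCgStab sketch above.
var solver = new MlkBiCgStab { NumberOfStartingVectors = 4 };   // 1 < k < number of unknowns
solver.Solve(A, b, Vector<double>.Build.Dense(b.Count), iterator, preconditioner);
```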
- - - The default number of starting vectors. - - - - - The collection of starting vectors which are used as the basis for the Krylov sub-space. - - - - - The number of starting vectors used by the algorithm - - - - - Gets or sets the number of starting vectors. - - - Must be larger than 1 and smaller than the number of variables in the matrix that - for which this solver will be used. - - - - - Resets the number of starting vectors to the default value. - - - - - Gets or sets a series of orthonormal vectors which will be used as basis for the - Krylov sub-space. - - - - - Gets the number of starting vectors to create - - Maximum number - Number of variables - Number of starting vectors to create - - - - Returns an array of starting vectors. - - The maximum number of starting vectors that should be created. - The number of variables. - - An array with starting vectors. The array will never be larger than the - but it may be smaller if - the is smaller than - the . - - - - - Create random vectors array - - Number of vectors - Size of each vector - Array of random vectors - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Source A. - Residual data. - x data. - b data. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. - - - - The TFQMR algorithm was taken from:
- Iterative methods for sparse linear systems. -
- Yousef Saad -
- Algorithm is described in Chapter 7, section 7.4.3, page 219 -
- - The example code below provides an indication of the possible use of the - solver. - -
-
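TFQMR is driven exactly like the other solvers in this file; only the algorithm behind `Solve` differs. A short sketch that also checks the iterator's final status (names assumed):

```csharp
// Assumed names; reuses A, b, iterator and preconditioner from the BiCgStab sketch above.
var x = Vector<double>.Build.Dense(b.Count);
new TFQMR().Solve(A, b, x, iterator, preconditioner);
Console.WriteLine(iterator.Status);   // e.g. Converged or StoppedWithoutConvergence
```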
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Is even? - - Number to check - true if even, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. - The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. - Wikipedia - CSR. - - - - - Gets the number of non zero elements in the matrix. - - The number of non zero elements. - - - - Create a new sparse matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new sparse matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable. - The enumerable is assumed to be in row-major order (row by row). - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - - Create a new sparse matrix with the given number of rows and columns as a copy of the given array. - The array is assumed to be in column-major order (column by column). - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix and initialize each value to the same provided value. - - - - - Create a new sparse matrix and initialize each value using the provided init function. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . 
- If the result matrix's dimensions are not the same as this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. 
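The sparse matrix documented above keeps its data in the 3-array compressed-sparse-row (CSR) format. A small self-contained illustration of that layout, independent of any library types:

```csharp
// CSR layout for the 3x4 matrix
//   [ 5 0 0 2 ]
//   [ 0 8 0 0 ]
//   [ 0 0 3 6 ]
double[] values        = { 5, 2, 8, 3, 6 };   // non-zero values, row by row
int[]    columnIndices = { 0, 3, 1, 2, 3 };   // column of each stored value
int[]    rowPointers   = { 0, 2, 3, 5 };      // row i occupies values[rowPointers[i] .. rowPointers[i+1])

// Reading element (row, col) the CSR way:
static double At(double[] v, int[] ci, int[] rp, int row, int col)
{
    for (int k = rp[row]; k < rp[row + 1]; k++)
        if (ci[k] == col) return v[k];
    return 0.0;   // not stored means zero
}
```

Only the non-zero values are stored, which is why the documentation above repeatedly warns against operations (such as adding a non-zero scalar) that would fill the structure completely.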
- - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Evaluates whether this matrix is symmetric. - - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - A vector with sparse storage, intended for very large vectors where most of the cells are zero. - - The sparse vector is not thread safe. - - - - Gets the number of non zero elements in the vector. - - The number of non zero elements. - - - - Create a new sparse vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new sparse vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new sparse vector as a copy of the given other vector. 
- This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector and initialize each value using the provided value. - - - - - Create a new sparse vector and initialize each value using the provided init function. - - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled - sparse vector and very inefficient. Would be better to work with a dense vector instead. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. - - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Negates vector and saves result to - - Target vector - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . 
- - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the modulus of. - The divisor to use, - The result of the calculation - If is . - - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - The p value. - Scalar ret = ( ∑|this[i]|^p )^(1/p) - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Creates a float sparse vector based on a string. The string can be in the following formats (without the - quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. - - - A float sparse vector containing the values specified by the given string. - - - the string to parse. - - - An that supplies culture-specific formatting information. - - - - - Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. - A return value indicates whether the conversion succeeded or failed. - - - A string containing a real vector to convert. - - - An that supplies culture-specific formatting information about value. - - - The parsed value. - - - If the conversion succeeds, the result will contain a complex number equivalent to value. - Otherwise the result will be null. - - - - - float version of the class. - - - - - Initializes a new instance of the Vector class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Conjugates vector and save result to - - Target vector - - - - Negates vector and saves result to - - Target vector - - - - Adds a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to add. - - - The vector to store the result of the addition. - - - - - Adds another vector to this vector and stores the result into the result vector. - - - The vector to add to this one. - - - The vector to store the result of the addition. - - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - - The scalar to subtract. - - - The vector to store the result of the subtraction. - - - - - Subtracts another vector to this vector and stores the result into the result vector. 
- - - The vector to subtract from this one. - - - The vector to store the result of the subtraction. - - - - - Multiplies a scalar to each element of the vector and stores the result in the result vector. - - - The scalar to multiply. - - - The vector to store the result of the multiplication. - - - - - Divides each element of the vector by a scalar and stores the result in the result vector. - - - The scalar to divide with. - - - The vector to store the result of the division. - - - - - Divides a scalar by each element of the vector and stores the result in the result vector. - - The scalar to divide. - The vector to store the result of the division. - - - - Pointwise multiplies this vector with another vector and stores the result into the result vector. - - The vector to pointwise multiply with this one. - The vector to store the result of the pointwise multiplication. - - - - Pointwise divide this vector with another vector and stores the result into the result vector. - - The vector to pointwise divide this one by. - The vector to store the result of the pointwise division. - - - - Pointwise raise this vector to an exponent and store the result into the result vector. - - The exponent to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise raise this vector to an exponent vector and store the result into the result vector. - - The exponent vector to raise this vector values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this vector with another vector and stores the result into the result vector. - - The pointwise denominator vector to use. - The result of the modulus. - - - - Pointwise applies the exponential function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result vector. - - The vector to store the result. - - - - Computes the dot product between this vector and another vector. - - The other vector. - The sum of a[i]*b[i] for all i. - - - - Computes the dot product between the conjugate of this vector and another vector. - - The other vector. - The sum of conj(a[i])*b[i] for all i. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for each element of the vector for the given divisor. - - The scalar denominator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the vector. - - The scalar numerator to use. - A vector to store the results in. - - - - Returns the value of the absolute minimum element. - - The value of the absolute minimum element. 
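The p-norm referred to throughout these vector classes is ret = ( Σ|x[i]|^p )^(1/p): p = 1 gives the Manhattan norm, p = 2 the Euclidean norm, and the limit p → ∞ approaches the maximum absolute value. A self-contained check of the formula:

```csharp
using System;
using System.Linq;

// p-norm as defined above: ( Σ |x[i]|^p )^(1/p).
static double PNorm(double[] x, double p) =>
    Math.Pow(x.Sum(v => Math.Pow(Math.Abs(v), p)), 1.0 / p);

// PNorm(new[] { 3.0, -4.0 }, 1.0) == 7.0  (Manhattan)
// PNorm(new[] { 3.0, -4.0 }, 2.0) == 5.0  (Euclidean)
```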
- - - - Returns the index of the absolute minimum element. - - The index of absolute minimum element. - - - - Returns the value of the absolute maximum element. - - The value of the absolute maximum element. - - - - Returns the index of the absolute maximum element. - - The index of absolute maximum element. - - - - Computes the sum of the vector's elements. - - The sum of the vector's elements. - - - - Calculates the L1 norm of the vector, also known as Manhattan norm. - - The sum of the absolute values. - - - - Calculates the L2 norm of the vector, also known as Euclidean norm. - - The square root of the sum of the squared values. - - - - Calculates the infinity norm of the vector. - - The maximum absolute value. - - - - Computes the p-Norm. - - - The p value. - - - Scalar ret = ( ∑|At(i)|^p )^(1/p) - - - - - Returns the index of the maximum element. - - The index of maximum element. - - - - Returns the index of the minimum element. - - The index of minimum element. - - - - Normalizes this vector to a unit vector with respect to the p-norm. - - - The p value. - - - This vector normalized to a unit vector with respect to the p-norm. - - - - - A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). - - - - - Number of rows. - - Using this instead of the RowCount property to speed up calculating - a matrix index in the data array. - - - - Number of columns. - - Using this instead of the ColumnCount property to speed up calculating - a matrix index in the data array. - - - - Gets the matrix's data. - - The matrix's data. - - - - Create a new dense matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new dense matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new dense matrix with the given number of rows and columns directly binding to a raw array. - The array is assumed to be in column-major order (column by column) and is used directly without copying. - Very efficient, but changes to the array and the matrix will affect each other. - - - - - - Create a new dense matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable. - The enumerable is assumed to be in column-major order (column by column). - This new matrix will be independent from the enumerable. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new dense matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. 
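The dense matrix described above stores its elements in a single flat array in column-major order, which is why the row count is cached for index calculations: element (row, col) of an m-by-n matrix sits at data[col * m + row]. A self-contained illustration:

```csharp
// Column-major indexing for a 3x2 dense matrix stored as one flat array.
int rows = 3, cols = 2;
var data = new double[rows * cols];

// Write A[2,1] = 42 and read it back (zero-based indices).
int row = 2, col = 1;
data[col * rows + row] = 42.0;
double a21 = data[col * rows + row];   // 42.0
```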
- - - - - Create a new dense matrix and initialize each value to the same provided value. - - - - - Create a new dense matrix and initialize each value using the provided init function. - - - - - Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Create a new dense matrix with values sampled from the provided random distribution. - - - - - Gets the matrix's data. - - The matrix's data. - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of add - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the matrix and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract. - The matrix to store the result of the subtraction. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. 
[Removed lines, Math.NET Numerics XML documentation. This hunk covered the remaining complex Matrix operator members (scalar divide, pointwise multiply/divide/power, Trace for square matrices, the +, -, unary negation and * operators for matrix/matrix, matrix/vector, vector/matrix and scalar operands, with the result representation chosen from the denser operand, plus the IsSymmetric and IsHermitian checks) and the complex DenseVector class: constructors from a storage instance, a length (zero-initialized, length >= 1), a raw array bound without copying, another vector, array/enumerable/indexed-enumerable copies, a constant value, an init function or a random distribution; add/subtract/negate/conjugate and scalar arithmetic; dot and conjugate-dot products; index of the absolute minimum/maximum; Sum; L1/L2/infinity/p-norms; pointwise divide and power; and Parse/TryParse of strings such as 'n', 'n;n;..', '(n;n;..)', '[n;n;...]' with an optional IFormatProvider.]
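For orientation, a hedged sketch of how the dense vector and matrix members documented above are typically called from C#. It assumes the Math.NET Numerics 3.x public API; the builder, ConjugateDotProduct and norm method names are assumptions based on that API, not something this hunk defines.

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;

class DenseBasics
{
    static void Main()
    {
        // Dense complex vectors and a dense matrix, built as copies of the given arrays.
        var v = Vector<Complex>.Build.DenseOfArray(new Complex[] { 1, 2, 3 });
        var w = Vector<Complex>.Build.DenseOfArray(new Complex[] { new Complex(0, 1), 1, 2 });
        var m = Matrix<Complex>.Build.DenseOfArray(new Complex[,] { { 1, 0, 0 }, { 0, 2, 0 }, { 0, 0, 3 } });

        // Element-wise arithmetic, dot products and norms, as documented above.
        Console.WriteLine(v + w);                     // vector addition
        Console.WriteLine(v.DotProduct(w));           // sum of a[i]*b[i]
        Console.WriteLine(v.ConjugateDotProduct(w));  // sum of conj(a[i])*b[i]
        Console.WriteLine(v.L2Norm());                // Euclidean norm

        // Matrix-vector product, pointwise product and trace.
        Console.WriteLine(m * v);
        Console.WriteLine(m.PointwiseMultiply(m));
        Console.WriteLine(m.Trace());
    }
}
```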
[Removed: documentation for the DiagonalMatrix type. Diagonal matrices may be non-square, the diagonal always starts at element (0,0), and writing a non-zero, non-NaN value to an off-diagonal cell throws (writes of 0.0 or NaN are ignored). Documented members: constructors from a storage instance, square or rectangular dimensions, a constant diagonal value, a raw diagonal array bound without copying, another diagonal matrix, a two-dimensional array, plain or indexed enumerables, an init function, an identity factory and a random distribution; negate and conjugate; matrix add/subtract; scalar multiply and divide (including scalar-by-matrix); matrix*vector and matrix*matrix products plus the (conjugate-)transpose product variants; the determinant; Diagonal get and copy-from-array/copy-from-vector (Min(Rows, Columns) elements); induced L1/L2/infinity norms; the Frobenius norm; and the condition number.]
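A small illustration of the diagonal type's behaviour described above. The DiagonalOfDiagonalArray builder name is an assumption from the Math.NET Numerics 3.x MatrixBuilder; the rest follows the documented semantics.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class DiagonalSketch
{
    static void Main()
    {
        // A 3x3 diagonal matrix built from its diagonal entries only.
        var d = Matrix<double>.Build.DiagonalOfDiagonalArray(new[] { 1.0, 2.0, 4.0 });
        var x = Vector<double>.Build.DenseOfArray(new[] { 1.0, 1.0, 1.0 });

        Console.WriteLine(d * x);            // scales each component by the diagonal
        Console.WriteLine(d.Determinant());  // product of the diagonal entries: 8
        Console.WriteLine(d.Diagonal());     // the diagonal as a vector

        // Off-diagonal writes are rejected by diagonal storage (see the notes above);
        // writing 0.0 is the documented exception and leaves the matrix unchanged.
        // d[0, 1] = 5.0;  // would throw
        d[0, 1] = 0.0;      // allowed: no change
    }
}
```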
[Removed: documentation for the rest of DiagonalMatrix (Inverse for square, non-singular matrices; lower/upper and strictly lower/upper triangles; SubMatrix; row and column permutation, which always throws because permuting a diagonal matrix is meaningless; IsSymmetric and IsHermitian) and for the dense factorization classes: Cholesky (A = L*L' for a symmetric positive definite A, computed at construction, exposing the determinant and log determinant), DenseCholesky, the complex DenseEvd (A = V*D*V' for Hermitian A; for non-symmetric A the eigenvalue matrix D is block diagonal and V may be badly conditioned), the modified Gram-Schmidt QR (A = QR with a unitary Q; requires row count >= column count and full rank), DenseLU (square matrices only), DenseQR (Householder based, with a Tau vector used by native solvers), DenseSvd (M = U*S*V' with singular values ordered descending), and the corresponding real-valued Evd/GramSchmidt/LU/QR/Svd classes with their absolute determinant, rank, full-rank and 2-norm properties (LU stores pivots so that P*A = L*U; a full QR yields an m x m Q and m x n R, a thin QR an m x n Q and n x n R). Each factorization exposes Solve for AX = B and Ax = b.]
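The factorizations above are normally reached through the matrix accessor methods. A hedged sketch: the .Cholesky()/.LU()/.QR()/.Svd() accessors and the ConditionNumber/Determinant properties are taken from the Math.NET Numerics 3.x API and should be treated as assumptions.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationSketch
{
    static void Main()
    {
        // A small symmetric positive definite system A*x = b.
        var A = Matrix<double>.Build.DenseOfArray(new[,] { { 4.0, 1.0 }, { 1.0, 3.0 } });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });

        // Cholesky: A = L*L' (throws if A is not symmetric positive definite).
        var chol = A.Cholesky();
        Console.WriteLine(chol.Solve(b));
        Console.WriteLine(chol.Determinant);

        // LU with pivoting (P*A = L*U), QR (Householder) and SVD solve the same system.
        Console.WriteLine(A.LU().Solve(b));
        Console.WriteLine(A.QR().Solve(b));
        Console.WriteLine(A.Svd().ConditionNumber);   // max(S) / min(S), as documented
    }
}
```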
[Removed: documentation for the SVD condition number (max(S)/min(S)) and determinant, and for the managed ("user") factorization implementations: UserCholesky (in-place factorization with a parallelized Cholesky step and Solve), the complex UserEvd built on EISPACK-derived routines (HTRIDI reduction of a Hermitian matrix to a real symmetric tridiagonal one, the tql2 symmetric tridiagonal QL algorithm, HTRIBK back-transformation of the eigenvectors, orthes/ortran reduction to Hessenberg form and hqr2 reduction from Hessenberg to real Schur form), UserGramSchmidt, UserLU (Solve and an LU-based Inverse), UserQR (Householder helpers that generate columns and compute Q and R across the available processors) and the start of UserSvd with its LAPACK-style helpers (signed absolute value, column swap and scale, vector scale, and a Givens rotation equivalent to the DROTG routine).]
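The eigenvalue machinery above is consumed through the factorization object returned by the matrix. A sketch; the Evd()/EigenValues/EigenVectors/D member names are assumptions from the public Math.NET Numerics API rather than from this hunk.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class EvdSketch
{
    static void Main()
    {
        // A symmetric matrix, so A = V*D*V' with an orthogonal V (see the remarks above).
        var A = Matrix<double>.Build.DenseOfArray(new[,] { { 2.0, 1.0 }, { 1.0, 2.0 } });

        var evd = A.Evd();
        Console.WriteLine(evd.EigenValues);   // complex vector of eigenvalues (here 1 and 3)
        Console.WriteLine(evd.EigenVectors);  // columns are the eigenvectors
        Console.WriteLine(evd.D);             // (block-)diagonal eigenvalue matrix

        // A*V should equal V*D up to rounding, which is the property the remarks describe.
        var residual = A * evd.EigenVectors - evd.EigenVectors * evd.D;
        Console.WriteLine(residual.FrobeniusNorm());
    }
}
```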
[Removed: documentation for the remaining UserSvd helpers (column and vector 2-norms, a conjugated column dot product, plane rotation of two columns, and Solve for AX = B and Ax = b) and for the abstract complex Matrix base class: coercing small values to zero by threshold, ConjugateTranspose (also into a result matrix), conjugation, negation, scalar and matrix add/subtract, scalar multiply and divide (including scalar-by-matrix), matrix*vector and matrix*matrix products with transpose and conjugate-transpose variants, pointwise multiply/divide/power, pointwise and scalar canonical modulus and remainder, pointwise exponential and natural logarithm, the Moore-Penrose pseudo-inverse and the trace.]
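A short sketch of the complex-matrix members listed above. The IsHermitian()/ConjugateTranspose()/PseudoInverse()/PointwiseExp() names are the public names one would expect for these documented operations and should be treated as assumptions.

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;

class ComplexMatrixSketch
{
    static void Main()
    {
        var i = Complex.ImaginaryOne;
        // A Hermitian 2x2 matrix: equal to its own conjugate transpose.
        var H = Matrix<Complex>.Build.DenseOfArray(new[,] { { new Complex(2, 0), 1 + i },
                                                            { 1 - i, new Complex(3, 0) } });

        Console.WriteLine(H.IsHermitian());          // true
        Console.WriteLine(H.ConjugateTranspose());   // same as H
        Console.WriteLine(H.Trace());                // 5

        // Moore-Penrose pseudo-inverse and a pointwise map, both documented above.
        var pinv = H.PseudoInverse();
        var expH = H.PointwiseExp();                 // element-wise exp, not a matrix exponential
        Console.WriteLine(pinv);
        Console.WriteLine(expH[0, 0]);
    }
}
```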
[Removed: documentation for the rest of the complex Matrix class (trace of square matrices, induced L1 and infinity norms, Frobenius norm, row and column p-norms and unit-p-norm normalization, row/column value and absolute-value sums, IsHermitian) and the introduction of the BiCgStab solver: the Bi-Conjugate Gradient Stabilized method is an "improvement" of the standard Conjugate Gradient solver that, unlike CG, also handles non-symmetric matrices, and much of its success depends on choosing a proper preconditioner. The algorithm is taken from "Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods" by Barrett, Berry, Chan, Demmel, Donato, Dongarra, Eijkhout, Pozo, Romine and van der Vorst (http://www.netlib.org/templates/Templates.html), Chapter 2, section 2.3.8, page 27.]
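The Solve(matrix, input, result, iterator, preconditioner) shape used below follows the signature documented for these solvers; the concrete type and namespace names (BiCgStab, Iterator<T>, the stop criteria, DiagonalPreconditioner) are the Math.NET Numerics 3.x names and should be treated as assumptions.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class BiCgStabSketch
{
    static void Main()
    {
        // A small non-symmetric system; BiCGStab does not require symmetry.
        var A = Matrix<double>.Build.DenseOfArray(new[,] { { 4.0, 1.0 }, { 2.0, 3.0 } });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });
        var x = Vector<double>.Build.Dense(2);   // result vector, filled in place

        // Stop after 1000 iterations or once the residual is small enough.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        // As the remarks stress, the preconditioner choice matters; the diagonal
        // (Jacobi-style) preconditioner documented below is the simplest option.
        var solver = new BiCgStab();
        solver.Solve(A, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine(x);
        Console.WriteLine((b - A * x).L2Norm());   // true residual, as in CalculateTrueResidual
    }
}
```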
[Removed: documentation for the BiCgStab members (CalculateTrueResidual, which evaluates residual = b - Ax, and Solve taking the coefficient matrix A, the right-hand side b, the result vector x, an iterator controlling when to stop, and a preconditioner) and the introduction of the CompositeSolver, which runs a sequence of matrix solvers and is based on "Faster PDE-based simulations using robust composite linear solvers" by S. Bhowmick, P. Raghavan, L. McInnes and B. Norris, Future Generation Computer Systems, Vol. 20, 2004, pp. 373-387; an iterator passed to the composite solver is shared by all sub-solvers.]
[Removed: documentation for the CompositeSolver's solver collection and Solve method; for the DiagonalPreconditioner, which uses the inverse of the matrix diagonal as preconditioning values, must be initialized with a square matrix and approximates the solution of Ax = b; and for the introduction of the GpBiCg solver: the Generalized Product Bi-Conjugate Gradient method is an alternative to BiCGStab that likewise works on non-symmetric matrices and depends heavily on the preconditioner. The algorithm is taken from "GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness" by S. Fujino, Applied Numerical Mathematics, Vol. 41, 2002, pp. 107-117.]
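Because both solvers expose the documented Solve(A, b, x, iterator, preconditioner) entry point, they can be swapped behind the common iterative-solver interface. A sketch; the IIterativeSolver<T>, BiCgStab and GpBiCg type names are assumptions based on the Math.NET Numerics solver namespaces.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class SolverSwapSketch
{
    static void Main()
    {
        var A = Matrix<double>.Build.DenseOfArray(new[,] { { 5.0, 2.0 }, { 1.0, 4.0 } });
        var b = Vector<double>.Build.DenseOfArray(new[] { 3.0, 1.0 });

        // BiCgStab and GpBiCg share the same Solve(A, b, x, iterator, preconditioner)
        // entry point, so they are interchangeable behind the common interface.
        foreach (IIterativeSolver<double> solver in new IIterativeSolver<double>[] { new BiCgStab(), new GpBiCg() })
        {
            var x = Vector<double>.Build.Dense(2);
            var iterator = new Iterator<double>(
                new ResidualStopCriterion<double>(1e-10),
                new IterationCountStopCriterion<double>(500));

            solver.Solve(A, b, x, iterator, new DiagonalPreconditioner());
            Console.WriteLine($"{solver.GetType().Name}: {(b - A * x).L2Norm():E2}");
        }
    }
}
```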
[Removed: documentation for the GpBiCg members (the numbers of BiCGStab and GPBiCG steps to take before switching between the two phases, CalculateTrueResidual, a helper deciding whether the current iteration runs BiCGStab steps, and Solve) and the introduction of the incomplete, level-0 LU factorization preconditioner ILU(0), whose algorithm is taken from Yousef Saad, "Iterative Methods for Sparse Linear Systems", Chapter 10, section 10.3.2, page 275.]
[Removed: documentation for the ILU(0) members (a single matrix holding the combined L and U factors to reduce storage, accessors returning the lower and upper triangular factors, Initialize for a square matrix, and Approximate) and the introduction of the ILUTP preconditioner, an incomplete LU factorization with drop tolerance and partial pivoting in which entries below the drop tolerance are removed from the factors. The algorithm is taken from Tzu-Yi Chen, "ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner", Lecture Notes in Computer Science, Vol. 3046, 2004, pp. 20-28 (Section 2, page 22).]
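All preconditioners in this file follow the same two-call contract documented above: Initialize(matrix) builds the internal data and Approximate(rhs, lhs) applies the approximation. A sketch with the diagonal preconditioner; the IPreconditioner<T> and DiagonalPreconditioner names are assumptions from the public solver API.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class PreconditionerSketch
{
    static void Main()
    {
        var A = Matrix<double>.Build.DenseOfArray(new[,] { { 4.0, 1.0 }, { 1.0, 3.0 } });
        var rhs = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });
        var lhs = Vector<double>.Build.Dense(2);

        // Initialize(matrix) builds the internal data, Approximate(rhs, lhs) applies it.
        IPreconditioner<double> p = new DiagonalPreconditioner();
        p.Initialize(A);          // requires a square matrix
        p.Approximate(rhs, lhs);  // lhs now holds rhs scaled by the inverse diagonal

        Console.WriteLine(lhs);   // roughly (0.25, 0.667) for this A
    }
}
```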
- - - The default fill level. - - - - - The default drop tolerance. - - - - - The decomposed upper triangular matrix. - - - - - The decomposed lower triangular matrix. - - - - - The array containing the pivot values. - - - - - The fill level. - - - - - The drop tolerance. - - - - - The pivot tolerance. - - - - - Initializes a new instance of the class with the default settings. - - - - - Initializes a new instance of the class with the specified settings. - - - The amount of fill that is allowed in the matrix. The value is a fraction of - the number of non-zero entries in the original matrix. Values should be positive. - - - The absolute drop tolerance which indicates below what absolute value an entry - will be dropped from the matrix. A drop tolerance of 0.0 means that no values - will be dropped. Values should always be positive. - - - The pivot tolerance which indicates at what level pivoting will take place. A - value of 0.0 means that no pivoting will take place. - - - - - Gets or sets the amount of fill that is allowed in the matrix. The - value is a fraction of the number of non-zero entries in the original - matrix. The standard value is 200. - - - - Values should always be positive and can be higher than 1.0. A value lower - than 1.0 means that the eventual preconditioner matrix will have fewer - non-zero entries as the original matrix. A value higher than 1.0 means that - the eventual preconditioner can have more non-zero values than the original - matrix. - - - Note that any changes to the FillLevel after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the absolute drop tolerance which indicates below what absolute value - an entry will be dropped from the matrix. The standard value is 0.0001. - - - - The values should always be positive and can be larger than 1.0. A low value will - keep more small numbers in the preconditioner matrix. A high value will remove - more small numbers from the preconditioner matrix. - - - Note that any changes to the DropTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the pivot tolerance which indicates at what level pivoting will - take place. The standard value is 0.0 which means pivoting will never take place. - - - - The pivot tolerance is used to calculate if pivoting is necessary. Pivoting - will take place if any of the values in a row is bigger than the - diagonal value of that row divided by the pivot tolerance, i.e. pivoting - will take place if row(i,j) > row(i,i) / PivotTolerance for - any j that is not equal to i. - - - Note that any changes to the PivotTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the lower triagonal elements. 
- - - - Returns the pivot array. This array is not needed for normal use because - the preconditioner will return the solution vector values in the proper order. - - - This method is used for debugging purposes only and should normally not be used. - - The pivot array. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. Note that the - method takes a general matrix type. However internally the data is stored - as a sparse matrix. Therefore it is not recommended to pass a dense matrix. - - If is . - If is not a square matrix. - - - - Pivot elements in the according to internal pivot array - - Row to pivot in - - - - Was pivoting already performed - - Pivots already done - Current item to pivot - true if performed, otherwise false - - - - Swap columns in the - - Source . - First column index to swap - Second column index to swap - - - - Sort vector descending, not changing vector but placing sorted indices to - - Start sort form - Sort till upper bound - Array with sorted vector indices - Source - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - Pivot elements in according to internal pivot array - - Source . - Result after pivoting. - - - - An element sort algorithm for the class. - - - This sort algorithm is used to sort the columns in a sparse matrix based on - the value of the element on the diagonal of the matrix. - - - - - Sorts the elements of the vector in decreasing - fashion. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Sorts the elements of the vector in decreasing - fashion using heap sort algorithm. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Build heap for double indices - - Root position - Length of - Indices of - Target - - - - Sift double indices - - Indices of - Target - Root position - Length of - - - - Sorts the given integers in a decreasing fashion. - - The values. - - - - Sort the given integers in a decreasing fashion using heapsort algorithm - - Array of values to sort - Length of - - - - Build heap - - Target values array - Root position - Length of - - - - Sift values - - Target value array - Root position - Length of - - - - Exchange values in array - - Target values array - First value to exchange - Second value to exchange - - - - A simple milu(0) preconditioner. - - - Original Fortran code by Yousef Saad (07 January 2004) - - - - Use modified or standard ILU(0) - - - - Gets or sets a value indicating whether to use modified or standard ILU(0). - - - - - Gets a value indicating whether the preconditioner is initialized. - - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square or is not an - instance of SparseCompressedRowMatrixStorage. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector b. - The left hand side vector x. - - - - MILU0 is a simple milu(0) preconditioner. - - Order of the matrix. - Matrix values in CSR format (input). - Column indices (input). 
The MILU0 routine's remaining parameters are the CSR row pointers (input) and, as output, the matrix values in MSR format, a combined row-pointer/column-index array, and a pointer array to the diagonal elements, plus a flag selecting the modified (MILU) algorithm, which is recommended; the routine returns 0 on success or k > 0 if a zero pivot is encountered at step k. Next comes the Multiple-Lanczos Bi-Conjugate Gradient stabilized solver, ML(k)-BiCGStab, described as an 'improvement' of the standard BiCgStab solver. The algorithm is taken from: Man-Chung Yeung and Tony F. Chan, "ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors", SIAM Journal on Scientific Computing, Vol. 21, No. 4, pp. 1263 - 1290. The original documentation also carried a short example indicating how the solver can be used.
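Since the MILU0 routine above takes its input in compressed-sparse-row form, a tiny self-contained sketch of that layout may help; the arrays below are illustrative and not taken from the library.

```csharp
// CSR storage of the 3x3 matrix
//   | 4 0 1 |
//   | 0 3 0 |
//   | 2 0 5 |
double[] values        = { 4, 1, 3, 2, 5 };   // non-zero values, row by row
int[]    columnIndices = { 0, 2, 1, 0, 2 };   // column of each stored value
int[]    rowPointers   = { 0, 2, 3, 5 };      // where each row starts in 'values'

// y = A * x computed directly from the three CSR arrays
double[] x = { 1, 2, 3 };
double[] y = new double[3];
for (int row = 0; row < 3; row++)
    for (int k = rowPointers[row]; k < rowPointers[row + 1]; k++)
        y[row] += values[k] * x[columnIndices[k]];

System.Console.WriteLine(string.Join(", ", y));   // 7, 6, 17
```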
The ML(k)-BiCGStab class documentation continues with the default and current number of starting vectors (the value must be larger than 1 and smaller than the number of variables in the matrix), a reset to the default count, the collection of orthonormal starting vectors used as the basis of the Krylov sub-space, helpers that decide how many starting vectors to create and that generate random starting vectors, CalculateTrueResidual (residual = b - Ax), and Solve(matrix, input, result, iterator, preconditioner) for Ax = b. It is followed by the Transpose-Free Quasi-Minimal Residual (TFQMR) solver, whose algorithm is taken from Yousef Saad, "Iterative Methods for Sparse Linear Systems", Chapter 7, Section 7.4.3, page 219; again the original documentation pointed to a short usage example.
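The deleted documentation only lists the Solve(matrix, input, result, iterator, preconditioner) members, so here is a hedged sketch of how such a solver is typically driven; the namespace, class and criterion names (TFQMR, MlkBiCgStab, Iterator, IterationCountStopCriterion, ResidualStopCriterion, MILU0Preconditioner) are my recollection of the standard Math.NET Numerics API and should be checked against the assembly actually shipped with the application.

```csharp
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

// Small test system A*x = b (values chosen arbitrarily for the sketch).
var A = SparseMatrix.OfArray(new double[,] {
    { 4, 1, 0 },
    { 1, 3, 1 },
    { 0, 1, 2 } });
var b = DenseVector.OfArray(new double[] { 1, 2, 3 });
var x = new DenseVector(3);                     // result vector, filled by Solve

// Stop after 1000 iterations or once the residual is small enough.
var iterator = new Iterator<double>(
    new IterationCountStopCriterion<double>(1000),
    new ResidualStopCriterion<double>(1e-10));

var solver = new TFQMR();                       // or: new MlkBiCgStab()
solver.Solve(A, b, x, iterator, new MILU0Preconditioner());

System.Console.WriteLine(x);
```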
TFQMR's members mirror the other solvers: CalculateTrueResidual (residual = b - Ax), a small is-even helper, and Solve(matrix, input, result, iterator, preconditioner). The documentation then turns to SparseMatrix, a matrix with sparse storage intended for very large, mostly-zero matrices, backed by the 3-array compressed-sparse-row (CSR) scheme (the original text linked to the Wikipedia article on CSR). Documented members include NonZerosCount, a large family of factory constructors (from an existing storage instance, from dimensions, as copies of other matrices, two-dimensional arrays, indexed or row-/column-major enumerables, column or row arrays and vectors, diagonal vectors or arrays, a fill value, an init function, or as an identity matrix), lower/upper and strictly lower/upper triangle extraction, negation, the induced infinity norm and the Frobenius norm, addition and subtraction, scalar, matrix and vector multiplication including transpose variants, and pointwise multiplication.
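All the SparseMatrix members used below (OfArray, NonZerosCount, LowerTriangle, the + operator, FrobeniusNorm) appear in the deleted documentation; the surrounding usage pattern is my assumption of typical Math.NET Numerics code, not something the text spells out.

```csharp
using MathNet.Numerics.LinearAlgebra.Double;

var A = SparseMatrix.OfArray(new double[,] {
    { 1, 0, 0 },
    { 0, 0, 2 },
    { 0, 3, 0 } });

System.Console.WriteLine(A.NonZerosCount);      // 3 stored entries out of 9 cells
var L = A.LowerTriangle();                      // new matrix, lower triangle only
var B = A + A.Transpose();                      // operators allocate a new matrix
System.Console.WriteLine(B.FrobeniusNorm());
```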
SparseMatrix continues with pointwise division, symmetry and Hermitian checks, and the usual operator overloads (+, -, unary negation, scalar, matrix and vector *), each of which allocates a new matrix and picks the representation of the denser operand. SparseVector follows: sparse storage for very large, mostly-zero vectors (not thread safe), with NonZerosCount, factory constructors, and the arithmetic members; notably, adding a non-zero scalar produces a 100% filled sparse vector, which is very inefficient, so a dense vector is recommended in that case. Further members are dot and conjugate-dot products, operator overloads including the modulus operator, absolute minimum/maximum element index, sum, L1, infinity and p norms, pointwise multiplication, and Parse/TryParse accepting strings of the form 'n', 'n;n;..', '(n;n;..)' or '[n;n;...]'. The abstract Complex vector base class starts here as well, with members to coerce small values to zero, conjugate, negate, and perform scalar and vector addition, subtraction, multiplication and division.
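A short sketch of the sparse-vector behaviour described above, in particular the warning about adding a scalar; the Build.Sparse helper is my assumption of the standard Math.NET Numerics builder API.

```csharp
using MathNet.Numerics.LinearAlgebra;

var v = Vector<double>.Build.Sparse(10000);   // all zero, nothing stored yet
v[17] = 2.5;
v[4093] = -1.0;

var w = Vector<double>.Build.Sparse(10000);
w[17] = 4.0;

System.Console.WriteLine(v.DotProduct(w));    // 10, only stored entries are touched

// As the documentation warns, adding a non-zero scalar fills every element:
var filled = v.Add(3.0);                      // effectively a 100% dense result
```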
The Complex vector base class continues with pointwise division, powers, canonical modulus and remainder, exponential and logarithm, dot and conjugate-dot products, canonical modulus against a scalar dividend or divisor, absolute minimum/maximum values and indices, sum, L1, L2, infinity and p norms (ret = (∑|At(i)|^p)^(1/p)), maximum/minimum index, and normalization to a unit vector with respect to the p-norm. DenseMatrix is documented next: storage is a one-dimensional array in column-major order, with cached row and column counts for fast index computation, raw data access, and factory constructors from an existing storage instance, from dimensions, from a raw column-major array bound directly without copying (efficient, but changes to the array and the matrix affect each other), and as copies of matrices, two-dimensional arrays, indexed or column-major enumerables, and columns given as enumerables.
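The norm members repeated for each vector type above behave as follows; DenseVector.OfArray and the norm method names are my reading of the standard Math.NET Numerics API.

```csharp
using MathNet.Numerics.LinearAlgebra.Double;

var v = DenseVector.OfArray(new double[] { 3.0, -4.0, 0.0 });

System.Console.WriteLine(v.L1Norm());        // 7, sum of absolute values (Manhattan)
System.Console.WriteLine(v.L2Norm());        // 5, Euclidean norm
System.Console.WriteLine(v.InfinityNorm());  // 4, largest absolute value
System.Console.WriteLine(v.Norm(3.0));       // general p-norm, (sum |v[i]|^p)^(1/p)

var unit = v.Normalize(2.0);                 // rescaled so that unit.L2Norm() == 1
```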
The remaining DenseMatrix constructors cover column and row arrays and vectors, diagonal vectors and arrays, a fill value, an init function, an identity matrix, and random samples from a provided distribution. Its operations include the induced L1 and infinity norms and the Frobenius norm, negation and complex conjugation, scalar and matrix addition and subtraction, scalar, vector and matrix multiplication including transpose and conjugate-transpose variants, scalar division, pointwise multiplication, division and powers, the trace (square matrices only), the standard operator overloads, and symmetry/Hermitian checks. DenseVector follows, with its length, raw data access, and constructors from an existing storage instance, a length, a raw array bound directly without copying, and copies of other vectors, arrays and enumerables.
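The column-major storage and the raw-array binding constructor described for DenseMatrix can be pictured like this; the (rows, columns, storage) constructor signature is assumed from the standard Math.NET Numerics API.

```csharp
using MathNet.Numerics.LinearAlgebra.Double;

// Column-major raw storage of the 2x3 matrix
//   | 1 3 5 |
//   | 2 4 6 |
double[] columnMajor = { 1, 2, 3, 4, 5, 6 };

// Binds directly to the array without copying, so both views share the data.
var m = new DenseMatrix(2, 3, columnMajor);

System.Console.WriteLine(m[1, 2]);        // 6 (column-major index 2*2 + 1)
columnMajor[0] = 42;                      // changes the matrix as well
System.Console.WriteLine(m[0, 0]);        // 42
```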
DenseVector continues with constructors from indexed enumerables, a fill value, an init function and a random distribution, access to (and direct binding of) the underlying array, scalar and vector addition, subtraction, negation, conjugation and multiplication, dot and conjugate-dot products, the operator overloads, absolute minimum/maximum indices, sum, L1, L2, infinity and p norms, pointwise division and powers, and Parse/TryParse for Complex32 dense vectors. DiagonalMatrix is documented next: it may be non-square, the diagonal always starts at element (0,0), and setting a non-diagonal entry throws an exception unless the value is 0.0 or NaN, in which case the matrix is left unchanged; its raw diagonal data and the first factory constructors (storage instance, square or rectangular dimensions) follow.
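The off-diagonal rule for DiagonalMatrix is worth a tiny sketch; the (rows, columns, diagonal array) constructor is assumed from the standard Math.NET Numerics API, while the throwing/no-op behaviour is exactly what the deleted text describes.

```csharp
using MathNet.Numerics.LinearAlgebra.Double;

// Only the Min(rows, columns) diagonal values are actually stored.
var d = new DiagonalMatrix(3, 3, new double[] { 1.0, 2.0, 3.0 });

System.Console.WriteLine(d[1, 1]);   // 2
d[0, 2] = 0.0;                       // allowed: writing zero off the diagonal changes nothing
// d[0, 2] = 5.0;                    // would throw: off-diagonal entries cannot be set
```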
The remaining DiagonalMatrix constructors take a diagonal fill value, a raw diagonal array bound without copying, copies of other (diagonal) matrices, two-dimensional arrays, indexed or plain enumerables, an init function, an identity matrix, or random diagonal values. Its operations include negation and conjugation, addition and subtraction, scalar, vector and matrix multiplication with transpose and conjugate-transpose variants, scalar division, the determinant, reading and writing the diagonal (Min(Rows, Columns) elements), the induced L1, L2 and infinity norms, the Frobenius norm, the condition number, the inverse (square, non-singular matrices only), triangle extraction, sub-matrix copies, and PermuteColumns/PermuteRows, which always throw because permuting a diagonal matrix is senseless, plus symmetry and Hermitian checks. The factorization documentation starts with Cholesky: for a symmetric, positive definite A the factorization is a lower triangular L with A = L*L', computed at construction time (the constructor throws if A is null, not square, or not positive definite) and exposing the determinant, the log determinant, and Solve for both AX = B and Ax = b. The complex eigenvalue decomposition begins here too: if A is Hermitian, then A = V*D*V' with a diagonal eigenvalue matrix D and a Hermitian (unitary) eigenvector matrix V, so that V*VH = I.
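The Cholesky members named above (the constructor-time factorization, Determinant, Solve) are typically used like this; DenseMatrix.OfArray and the Cholesky() method are my assumption of the standard Math.NET Numerics API.

```csharp
using MathNet.Numerics.LinearAlgebra.Double;

// Symmetric positive definite, so the factorization A = L*L' exists.
var A = DenseMatrix.OfArray(new double[,] {
    { 4, 2 },
    { 2, 3 } });
var b = DenseVector.OfArray(new double[] { 1, 2 });

var cholesky = A.Cholesky();          // throws if A is not symmetric positive definite
var x = cholesky.Solve(b);            // solves A*x = b from the cached factor
System.Console.WriteLine(cholesky.Determinant);
System.Console.WriteLine(A * x);      // reproduces b up to rounding
```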
- If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Factorize matrix using the modified Gram-Schmidt method. - - Initial matrix. On exit is replaced by Q. - Number of rows in Q. - Number of columns in Q. - On exit is filled by R. - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Gets or sets Tau vector. 
Contains additional information on Q - used for native solver. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - If row count is less then column count - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - If SVD algorithm failed to converge with matrix . - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a real matrix. - - - If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is orthogonal. - I.e. A = V*D*V' and V*VT=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Gets the absolute value of determinant of the square matrix for which the EVD was computed. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. 
- - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - In the Math.Net implementation we also store a set of pivot elements for increased - numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Gets the determinant of the matrix for which the LU factorization was computed. - - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - If a factorization is performed, the resulting Q matrix is an m x m matrix - and the R matrix is an m x n matrix. If a factorization is performed, the - resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. - - - - - Gets the absolute determinant value of the matrix for which the QR matrix was computed. - - - - - Gets a value indicating whether the matrix is full rank or not. - - true if the matrix is full rank; otherwise false. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD). - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Gets the effective numerical matrix rank. - - The number of non-negligible singular values. - - - - Gets the two norm of the . - - The 2-norm of the . - - - - Gets the condition number max(S) / min(S) - - The condition number. - - - - Gets the determinant of the square matrix for which the SVD was computed. - - - - - A class which encapsulates the functionality of a Cholesky factorization for user matrices. - For a symmetric, positive definite matrix A, the Cholesky factorization - is an lower triangular matrix L so that A = L*L'. - - - The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric - or positive definite, the constructor will throw an exception. - - - - - Computes the Cholesky factorization in-place. - - On entry, the matrix to factor. On exit, the Cholesky factor matrix - If is null. - If is not a square matrix. - If is not positive definite. - - - - Initializes a new instance of the class. This object will compute the - Cholesky factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - If is not positive definite. - - - - Calculates the Cholesky factorization of the input matrix. 
- - The matrix to be factorized. - If is null. - If is not a square matrix. - If is not positive definite. - If does not have the same dimensions as the existing factor. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves a system of linear equations, AX = B, with A Cholesky factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A Cholesky factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Eigenvalues and eigenvectors of a complex matrix. - - - If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is - diagonal and the eigenvector matrix V is Hermitian. - I.e. A = V*D*V' and V*VH=I. - If A is not symmetric, then the eigenvalue matrix D is block diagonal - with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, - lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The - columns of V represent the eigenvectors in the sense that A*V = V*D, - i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly - conditioned, or even singular, so the validity of the equation - A = V*D*Inverse(V) depends upon V.Condition(). - - - - - Initializes a new instance of the class. This object will compute the - the eigenvalue decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - If it is known whether the matrix is symmetric or not the routine can skip checking it itself. - If is null. - If EVD algorithm failed to converge with matrix . - - - - Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. - - Source matrix to reduce - Output: Arrays for internal storage of real parts of eigenvalues - Output: Arrays for internal storage of imaginary parts of eigenvalues - Output: Arrays that contains further information about the transformations. - Order of initial matrix - This is derived from the Algol procedures HTRIDI by - Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - The eigen vectors to work on. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Determines eigenvectors by undoing the symmetric tridiagonalize transformation - - The eigen vectors to work on. - Previously tridiagonalized matrix by . - Contains further information about the transformations - Input matrix order - This is derived from the Algol procedures HTRIBK, by - by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Nonsymmetric reduction to Hessenberg form. - - The eigen vectors to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. 
Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - The eigen vectors to work on. - The eigen values to work on. - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A EVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. - Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. - - - The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. - - - - - Initializes a new instance of the class. This object creates an unitary matrix - using the modified Gram-Schmidt method. - - The matrix to factor. - If is null. - If row count is less then column count - If is rank deficient - - - - Solves a system of linear equations, AX = B, with A QR factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of an LU factorization. - For a matrix A, the LU factorization is a pair of lower triangular matrix L and - upper triangular matrix U so that A = L*U. - - - The computation of the LU factorization is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - LU factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - If is null. - If is not a square matrix. - - - - Solves a system of linear equations, AX = B, with A LU factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A LU factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Returns the inverse of this matrix. The inverse is calculated using LU decomposition. - - The inverse of this matrix. - - - - A class which encapsulates the functionality of the QR decomposition. - Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix - (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix - (also called right triangular matrix). - - - The computation of the QR decomposition is done at construction time by Householder transformation. - - - - - Initializes a new instance of the class. This object will compute the - QR factorization when the constructor is called and cache it's factorization. - - The matrix to factor. - The QR factorization method to use. - If is null. - - - - Generate column from initial matrix to work array - - Initial matrix - The first row - Column index - Generated vector - - - - Perform calculation of Q or R - - Work array - Q or R matrices - The first row - The last row - The first column - The last column - Number of available CPUs - - - - Solves a system of linear equations, AX = B, with A QR factorized. 
- - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A QR factorized. - - The right hand side vector, b. - The left hand side , x. - - - - A class which encapsulates the functionality of the singular value decomposition (SVD) for . - Suppose M is an m-by-n matrix whose entries are real numbers. - Then there exists a factorization of the form M = UΣVT where: - - U is an m-by-m unitary matrix; - - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; - - VT denotes transpose of V, an n-by-n unitary matrix; - Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal - entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined - by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. - - - The computation of the singular value decomposition is done at construction time. - - - - - Initializes a new instance of the class. This object will compute the - the singular value decomposition when the constructor is called and cache it's decomposition. - - The matrix to factor. - Compute the singular U and VT vectors or not. - If is null. - - - - - Calculates absolute value of multiplied on signum function of - - Complex32 value z1 - Complex32 value z2 - Result multiplication of signum function and absolute value - - - - Interchanges two vectors and - - Source matrix - The number of rows in - Column A index to swap - Column B index to swap - - - - Scale column by starting from row - - Source matrix - The number of rows in - Column to scale - Row to scale from - Scale value - - - - Scale vector by starting from index - - Source vector - Row to scale from - Scale value - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Calculate Norm 2 of the column in matrix starting from row - - Source matrix - The number of rows in - Column index - Start row index - Norm2 (Euclidean norm) of the column - - - - Calculate Norm 2 of the vector starting from index - - Source vector - Start index - Norm2 (Euclidean norm) of the vector - - - - Calculate dot product of and conjugating the first vector. - - Source matrix - The number of rows in - Index of column A - Index of column B - Starting row index - Dot product value - - - - Performs rotation of points in the plane. Given two vectors x and y , - each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) - - Source matrix - The number of rows in - Index of column A - Index of column B - scalar cos value - scalar sin value - - - - Solves a system of linear equations, AX = B, with A SVD factorized. - - The right hand side , B. - The left hand side , X. - - - - Solves a system of linear equations, Ax = b, with A SVD factorized. - - The right hand side vector, b. - The left hand side , x. - - - - Complex32 version of the class. 
- - - - - Initializes a new instance of the Matrix class. - - - - - Set all values whose absolute value is smaller than the threshold to zero. - - - - - Returns the conjugate transpose of this matrix. - - The conjugate transpose of this matrix. - - - - Puts the conjugate transpose of this matrix into the result matrix. - - - - - Complex conjugates each element of this matrix and place the results into the result matrix. - - The result of the conjugation. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - - Add a scalar to each element of the matrix and stores the result in the result vector. - - The scalar to add. - The matrix to store the result of the addition. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts a scalar from each element of the vector and stores the result in the result vector. - - The scalar to subtract. - The matrix to store the result of the subtraction. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Divides each element of the matrix by a scalar and places results into the result matrix. - - The scalar to divide the matrix with. - The matrix to store the result of the division. - - - - Divides a scalar by each element of the matrix and stores the result in the result matrix. - - The scalar to divide by each element of the matrix. - The matrix to store the result of the division. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. 
- - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. - - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The matrix to store the result of the pointwise power. - - - - Pointwise raise this matrix to an exponent and store the result into the result matrix. - - The exponent to raise this matrix values to. - The vector to store the result of the pointwise power. - - - - Pointwise canonical modulus, where the result has the sign of the divisor, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Pointwise remainder (% operator), where the result has the sign of the dividend, - of this matrix with another matrix and stores the result into the result matrix. - - The pointwise denominator matrix to use - The result of the modulus. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the canonical modulus, where the result has the sign of the divisor, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given divisor each element of the matrix. - - The scalar denominator to use. - Matrix to store the results in. - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - for the given dividend for each element of the matrix. - - The scalar numerator to use. - A vector to store the results in. - - - - Pointwise applies the exponential function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. - - The matrix to store the result. - - - - Computes the Moore-Penrose Pseudo-Inverse of this matrix. - - - - - Computes the trace of this matrix. - - The trace of this matrix - If the matrix is not square - - - Calculates the induced L1 norm of this matrix. - The maximum absolute column sum of the matrix. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Calculates the p-norms of all row vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the p-norms of all column vectors. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all row vectors to a unit p-norm. - Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Normalizes all column vectors to a unit p-norm. 
- Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) - - - - - Calculates the value sum of each row vector. - - - - - Calculates the absolute value sum of each row vector. - - - - - Calculates the value sum of each column vector. - - - - - Calculates the absolute value sum of each column vector. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - A Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' - of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the - BiCGStab can be used on non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the proper preconditioner.
- The Bi-CGSTAB algorithm was taken from:
- Templates for the solution of linear systems: Building blocks for iterative methods,
- Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Charles Romine and Henk van der Vorst,
- URL: http://www.netlib.org/templates/Templates.html
- Algorithm is described in Chapter 2, section 2.3.8, page 27
- The example code below provides an indication of the possible use of the solver.
-
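As a rough illustration of the calling pattern described above (coefficient matrix A, right-hand side b, result vector x, an iterator carrying the stop criteria, and a preconditioner), here is a minimal sketch in C#. It assumes the Math.NET Numerics 3.x names `BiCgStab`, `Iterator<double>`, `IterationCountStopCriterion<double>`, `ResidualStopCriterion<double>` and `DiagonalPreconditioner`; check them against the library version in use.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class BiCgStabSketch
{
    static void Main()
    {
        // Small non-symmetric test system A*x = b.
        var A = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4.0, 1.0, 0.0 },
            { 2.0, 5.0, 1.0 },
            { 0.0, 1.0, 3.0 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
        var x = Vector<double>.Build.Dense(b.Count);   // result vector, filled by the solver

        // Stop after 1000 iterations or once the residual is small enough.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        // The choice of preconditioner matters for convergence (see the remark above).
        var solver = new BiCgStab();
        solver.Solve(A, b, x, iterator, new DiagonalPreconditioner());

        // True residual b - A*x should now be close to zero.
        Console.WriteLine((b - A * x).L2Norm());
    }
}
```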
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient , A. - The solution , b. - The result , x. - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A composite matrix solver. The actual solver is made by a sequence of - matrix solvers. - - - - Solver based on:
- Faster PDE-based simulations using robust composite linear solvers
- S. Bhowmick, P. Raghavan, L. McInnes and B. Norris,
- Future Generation Computer Systems, Vol 20, 2004, pp 373-387
-
- Note that if an iterator is passed to this solver it will be used for all the sub-solvers.
-
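The composite idea, trying a chain of solvers and keeping the first acceptable answer, can also be hand-rolled. The sketch below only illustrates that concept and is not the library's own composite-solver API; it reuses the assumed class names from the previous sketch and falls back from BiCGStab to TFQMR.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

static class FallbackSolver
{
    // Try each solver in turn and return the first solution whose true residual is acceptable.
    public static Vector<double> Solve(Matrix<double> A, Vector<double> b, double tolerance = 1e-10)
    {
        var chain = new IIterativeSolver<double>[] { new BiCgStab(), new TFQMR() };
        foreach (var solver in chain)
        {
            var x = Vector<double>.Build.Dense(b.Count);
            // A fresh iterator per attempt, so each solver starts from clean stop criteria.
            var iterator = new Iterator<double>(
                new IterationCountStopCriterion<double>(1000),
                new ResidualStopCriterion<double>(tolerance));
            try
            {
                solver.Solve(A, b, x, iterator, new DiagonalPreconditioner());
            }
            catch (Exception)
            {
                continue;   // this solver failed outright, fall through to the next one
            }

            if ((b - A * x).L2Norm() <= tolerance * (1.0 + b.L2Norm()))
                return x;   // good enough, stop the chain here
        }
        throw new InvalidOperationException("No solver in the chain converged.");
    }
}
```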
- - - The collection of solvers that will be used - - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A diagonal preconditioner. The preconditioner uses the inverse - of the matrix diagonal as preconditioning values. - - - - - The inverse of the matrix diagonal. - - - - - Returns the decomposed matrix diagonal. - - The matrix diagonal. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - A Generalized Product Bi-Conjugate Gradient iterative matrix solver. - - - - The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an - alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. - Unlike the CG solver the GPBiCG solver can be used on - non-symmetric matrices.
- Note that much of the success of the solver depends on the selection of the proper preconditioner.
- The GPBiCG algorithm was taken from:
- GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness,
- S. Fujino,
- Applied Numerical Mathematics, Volume 41, 2002, pp 107-117
-
- The example code below provides an indication of the possible use of the solver.
-
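A sketch of the same calling pattern for this solver, reusing one solver instance for several right-hand sides. The class name `GpBiCg` is an assumption; the BiCGStab/GPBiCG switching described below happens inside the solver, so nothing extra is needed at the call site.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class GpBiCgSketch
{
    static void Main()
    {
        var A = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 3.0, -1.0,  0.0 },
            { 1.0,  4.0, -2.0 },
            { 0.0,  2.0,  5.0 }
        });

        var solver = new GpBiCg();   // alternates internally between BiCGStab and GPBiCG steps
        var rightHandSides = new[]
        {
            Vector<double>.Build.DenseOfArray(new[] { 2.0, 0.0, 1.0 }),
            Vector<double>.Build.DenseOfArray(new[] { 0.0, 1.0, 0.0 })
        };

        foreach (var b in rightHandSides)
        {
            var x = Vector<double>.Build.Dense(b.Count);
            // Fresh iterator per solve, so the stop criteria start from a clean state.
            var iterator = new Iterator<double>(
                new IterationCountStopCriterion<double>(1000),
                new ResidualStopCriterion<double>(1e-10));
            solver.Solve(A, b, x, iterator, new DiagonalPreconditioner());
            Console.WriteLine("true residual: " + (b - A * x).L2Norm());
        }
    }
}
```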
- - - Indicates the number of BiCGStab steps should be taken - before switching. - - - - - Indicates the number of GPBiCG steps should be taken - before switching. - - - - - Gets or sets the number of steps taken with the BiCgStab algorithm - before switching over to the GPBiCG algorithm. - - - - - Gets or sets the number of steps taken with the GPBiCG algorithm - before switching over to the BiCgStab algorithm. - - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Decide if to do steps with BiCgStab - - Number of iteration - true if yes, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - An incomplete, level 0, LU factorization preconditioner. - - - The ILU(0) algorithm was taken from:
- Iterative methods for sparse linear systems
- Yousef Saad
- Algorithm is described in Chapter 10, section 10.3.2, page 275
-
-
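The preconditioner contract sketched below, an initialization step that builds the incomplete factor followed by approximate solves against it, follows the descriptions in this section; the class name `ILU0Preconditioner` and the member names `Initialize`/`Approximate` are assumptions about the exact Math.NET Numerics API.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class Ilu0Sketch
{
    static void Main()
    {
        // Sparse test matrix; ILU(0) keeps the sparsity pattern of A in its combined L/U factor.
        var A = Matrix<double>.Build.SparseOfArray(new double[,]
        {
            {  4.0, -1.0,  0.0 },
            { -1.0,  4.0, -1.0 },
            {  0.0, -1.0,  4.0 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });

        var preconditioner = new ILU0Preconditioner();   // class name assumed for this Math.NET version
        preconditioner.Initialize(A);                    // builds the combined lower/upper factor

        // Approximate solves M*z = b, where M is the incomplete factorization of A.
        var z = Vector<double>.Build.Dense(b.Count);
        preconditioner.Approximate(b, z);

        // Typical use: hand the initialized preconditioner to an iterative solver.
        var x = Vector<double>.Build.Dense(b.Count);
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));
        new BiCgStab().Solve(A, b, x, iterator, preconditioner);
        Console.WriteLine((b - A * x).L2Norm());
    }
}
```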
- - - The matrix holding the lower (L) and upper (U) matrices. The - decomposition matrices are combined to reduce storage. - - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - A new matrix containing the lower triagonal elements. - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square matrix. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - This class performs an Incomplete LU factorization with drop tolerance - and partial pivoting. The drop tolerance indicates which additional entries - will be dropped from the factorized LU matrices. - - - The ILUTP-Mem algorithm was taken from:
- ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner,
- Tzu-Yi Chen, Department of Mathematics and Computer Science,
- Pomona College, Claremont CA 91711, USA
- Published in:
- Lecture Notes in Computer Science
- Volume 3046 / 2004
- pp. 20 - 28
- Algorithm is described in Section 2, page 22
-
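A sketch of configuring the fill level, drop tolerance and pivot tolerance discussed below before the preconditioner is initialized. The property names come from the remarks further down; the class name `ILUTPPreconditioner` and its parameterless constructor are assumptions.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class IlutpSketch
{
    static void Main()
    {
        var A = Matrix<double>.Build.SparseOfArray(new double[,]
        {
            { 10.0,  1.0,  0.0,  0.5 },
            {  1.0,  8.0,  2.0,  0.0 },
            {  0.0,  2.0,  9.0,  1.0 },
            {  0.5,  0.0,  1.0,  7.0 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 0.0, 0.0, 1.0 });
        var x = Vector<double>.Build.Dense(b.Count);

        // Configure the factorization before it is initialized; changing these values
        // afterwards would require re-initializing the preconditioner.
        var ilutp = new ILUTPPreconditioner   // class name assumed
        {
            FillLevel = 10.0,       // allowed fill, as a multiple of the original non-zero count
            DropTolerance = 1e-4,   // drop entries with absolute value below this
            PivotTolerance = 0.0    // 0.0 disables pivoting
        };

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));
        new BiCgStab().Solve(A, b, x, iterator, ilutp);
        Console.WriteLine((b - A * x).L2Norm());
    }
}
```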
- - - The default fill level. - - - - - The default drop tolerance. - - - - - The decomposed upper triangular matrix. - - - - - The decomposed lower triangular matrix. - - - - - The array containing the pivot values. - - - - - The fill level. - - - - - The drop tolerance. - - - - - The pivot tolerance. - - - - - Initializes a new instance of the class with the default settings. - - - - - Initializes a new instance of the class with the specified settings. - - - The amount of fill that is allowed in the matrix. The value is a fraction of - the number of non-zero entries in the original matrix. Values should be positive. - - - The absolute drop tolerance which indicates below what absolute value an entry - will be dropped from the matrix. A drop tolerance of 0.0 means that no values - will be dropped. Values should always be positive. - - - The pivot tolerance which indicates at what level pivoting will take place. A - value of 0.0 means that no pivoting will take place. - - - - - Gets or sets the amount of fill that is allowed in the matrix. The - value is a fraction of the number of non-zero entries in the original - matrix. The standard value is 200. - - - - Values should always be positive and can be higher than 1.0. A value lower - than 1.0 means that the eventual preconditioner matrix will have fewer - non-zero entries as the original matrix. A value higher than 1.0 means that - the eventual preconditioner can have more non-zero values than the original - matrix. - - - Note that any changes to the FillLevel after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the absolute drop tolerance which indicates below what absolute value - an entry will be dropped from the matrix. The standard value is 0.0001. - - - - The values should always be positive and can be larger than 1.0. A low value will - keep more small numbers in the preconditioner matrix. A high value will remove - more small numbers from the preconditioner matrix. - - - Note that any changes to the DropTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Gets or sets the pivot tolerance which indicates at what level pivoting will - take place. The standard value is 0.0 which means pivoting will never take place. - - - - The pivot tolerance is used to calculate if pivoting is necessary. Pivoting - will take place if any of the values in a row is bigger than the - diagonal value of that row divided by the pivot tolerance, i.e. pivoting - will take place if row(i,j) > row(i,i) / PivotTolerance for - any j that is not equal to i. - - - Note that any changes to the PivotTolerance after creating the preconditioner - will invalidate the created preconditioner and will require a re-initialization of - the preconditioner. - - - Thrown if a negative value is provided. - - - - Returns the upper triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the upper triagonal elements. - - - - Returns the lower triagonal matrix that was created during the LU decomposition. - - - This method is used for debugging purposes only and should normally not be used. - - A new matrix containing the lower triagonal elements. 
- - - - Returns the pivot array. This array is not needed for normal use because - the preconditioner will return the solution vector values in the proper order. - - - This method is used for debugging purposes only and should normally not be used. - - The pivot array. - - - - Initializes the preconditioner and loads the internal data structures. - - - The upon which this preconditioner is based. Note that the - method takes a general matrix type. However internally the data is stored - as a sparse matrix. Therefore it is not recommended to pass a dense matrix. - - If is . - If is not a square matrix. - - - - Pivot elements in the according to internal pivot array - - Row to pivot in - - - - Was pivoting already performed - - Pivots already done - Current item to pivot - true if performed, otherwise false - - - - Swap columns in the - - Source . - First column index to swap - Second column index to swap - - - - Sort vector descending, not changing vector but placing sorted indices to - - Start sort form - Sort till upper bound - Array with sorted vector indices - Source - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector. - The left hand side vector. Also known as the result vector. - - - - Pivot elements in according to internal pivot array - - Source . - Result after pivoting. - - - - An element sort algorithm for the class. - - - This sort algorithm is used to sort the columns in a sparse matrix based on - the value of the element on the diagonal of the matrix. - - - - - Sorts the elements of the vector in decreasing - fashion. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Sorts the elements of the vector in decreasing - fashion using heap sort algorithm. The vector itself is not affected. - - The starting index. - The stopping index. - An array that will contain the sorted indices once the algorithm finishes. - The that contains the values that need to be sorted. - - - - Build heap for double indices - - Root position - Length of - Indices of - Target - - - - Sift double indices - - Indices of - Target - Root position - Length of - - - - Sorts the given integers in a decreasing fashion. - - The values. - - - - Sort the given integers in a decreasing fashion using heapsort algorithm - - Array of values to sort - Length of - - - - Build heap - - Target values array - Root position - Length of - - - - Sift values - - Target value array - Root position - Length of - - - - Exchange values in array - - Target values array - First value to exchange - Second value to exchange - - - - A simple milu(0) preconditioner. - - - Original Fortran code by Yousef Saad (07 January 2004) - - - - Use modified or standard ILU(0) - - - - Gets or sets a value indicating whether to use modified or standard ILU(0). - - - - - Gets a value indicating whether the preconditioner is initialized. - - - - - Initializes the preconditioner and loads the internal data structures. - - The matrix upon which the preconditioner is based. - If is . - If is not a square or is not an - instance of SparseCompressedRowMatrixStorage. - - - - Approximates the solution to the matrix equation Ax = b. - - The right hand side vector b. - The left hand side vector x. - - - - MILU0 is a simple milu(0) preconditioner. - - Order of the matrix. - Matrix values in CSR format (input). - Column indices (input). 
- Row pointers (input). - Matrix values in MSR format (output). - Row pointers and column indices (output). - Pointer to diagonal elements (output). - True if the modified/MILU algorithm should be used (recommended) - Returns 0 on success or k > 0 if a zero pivot was encountered at step k. - - - - A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. - - - - The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' - of the standard BiCgStab solver. - - - The algorithm was taken from:
- ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors,
- Man-Chung Yeung and Tony F. Chan,
- SIAM Journal of Scientific Computing,
- Volume 21, Number 4, pp. 1263-1290
- The example code below provides an indication of the possible use of the solver.
-
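A sketch of adjusting the number of Lanczos starting vectors before solving. `MlkBiCgStab` and the property name `NumberOfStartingVectors` are assumed names, and the value must stay above 1 and below the number of variables, as the remarks below require.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class MlkBiCgStabSketch
{
    static void Main()
    {
        var A = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 5.0, 1.0, 0.0, 0.0 },
            { 1.0, 6.0, 2.0, 0.0 },
            { 0.0, 2.0, 7.0, 1.0 },
            { 0.0, 0.0, 1.0, 4.0 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 1.0, 1.0, 1.0 });
        var x = Vector<double>.Build.Dense(b.Count);

        var solver = new MlkBiCgStab
        {
            // Must be larger than 1 and smaller than the number of variables (4 here).
            NumberOfStartingVectors = 2   // property name assumed
        };

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));
        solver.Solve(A, b, x, iterator, new DiagonalPreconditioner());
        Console.WriteLine((b - A * x).L2Norm());
    }
}
```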
- - - The default number of starting vectors. - - - - - The collection of starting vectors which are used as the basis for the Krylov sub-space. - - - - - The number of starting vectors used by the algorithm - - - - - Gets or sets the number of starting vectors. - - - Must be larger than 1 and smaller than the number of variables in the matrix that - for which this solver will be used. - - - - - Resets the number of starting vectors to the default value. - - - - - Gets or sets a series of orthonormal vectors which will be used as basis for the - Krylov sub-space. - - - - - Gets the number of starting vectors to create - - Maximum number - Number of variables - Number of starting vectors to create - - - - Returns an array of starting vectors. - - The maximum number of starting vectors that should be created. - The number of variables. - - An array with starting vectors. The array will never be larger than the - but it may be smaller if - the is smaller than - the . - - - - - Create random vectors array - - Number of vectors - Size of each vector - Array of random vectors - - - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Source A. - Residual data. - x data. - b data. - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. - - - - The TFQMR algorithm was taken from:
- Iterative methods for sparse linear systems,
- Yousef Saad,
- Algorithm is described in Chapter 7, section 7.4.3, page 219
- The example code below provides an indication of the possible use of the solver.
-
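A sketch of driving this solver against a sparse system, the storage type documented in the following sections; `TFQMR` is the assumed class name and the rest follows the same pattern as the earlier sketches.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class TfqmrSketch
{
    static void Main()
    {
        // Build a sparse tridiagonal system; most entries stay zero in CSR-backed storage.
        const int n = 100;
        var A = Matrix<double>.Build.Sparse(n, n);
        for (int i = 0; i < n; i++)
        {
            A[i, i] = 4.0;
            if (i > 0)     A[i, i - 1] = -1.0;
            if (i < n - 1) A[i, i + 1] = -1.0;
        }
        var b = Vector<double>.Build.Dense(n, 1.0);   // all-ones right-hand side
        var x = Vector<double>.Build.Dense(n);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(2000),
            new ResidualStopCriterion<double>(1e-10));
        new TFQMR().Solve(A, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine("true residual: " + (b - A * x).L2Norm());
    }
}
```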
- - - Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax - - Instance of the A. - Residual values in . - Instance of the x. - Instance of the b. - - - - Is even? - - Number to check - true if even, otherwise false - - - - Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the - solution vector and x is the unknown vector. - - The coefficient matrix, A. - The solution vector, b - The result vector, x - The iterator to use to control when to stop iterating. - The preconditioner to use for approximations. - - - - A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. - The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. - Wikipedia - CSR. - - - - - Gets the number of non zero elements in the matrix. - - The number of non zero elements. - - - - Create a new sparse matrix straight from an initialized matrix storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new square sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the order is less than one. - - - - Create a new sparse matrix with the given number of rows and columns. - All cells of the matrix will be initialized to zero. - Zero-length matrices are not supported. - - If the row or column count is less than one. - - - - Create a new sparse matrix as a copy of the given other matrix. - This new matrix will be independent from the other matrix. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given two-dimensional array. - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given indexed enumerable. - Keys must be provided at most once, zero is assumed if a key is omitted. - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable. - The enumerable is assumed to be in row-major order (row by row). - This new matrix will be independent from the enumerable. - A new memory block will be allocated for storing the vector. - - - - - - Create a new sparse matrix with the given number of rows and columns as a copy of the given array. - The array is assumed to be in column-major order (column by column). - This new matrix will be independent from the provided array. - A new memory block will be allocated for storing the matrix. - - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable columns. - Each enumerable in the master enumerable specifies a column. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. 
- A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given column vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given enumerable of enumerable rows. - Each enumerable in the master enumerable specifies a row. - This new matrix will be independent from the enumerables. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row arrays. - This new matrix will be independent from the arrays. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix as a copy of the given row vectors. - This new matrix will be independent from the vectors. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given vector. - This new matrix will be independent from the vector. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix with the diagonal as a copy of the given array. - This new matrix will be independent from the array. - A new memory block will be allocated for storing the matrix. - - - - - Create a new sparse matrix and initialize each value to the same provided value. - - - - - Create a new sparse matrix and initialize each value using the provided init function. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. - - - - - Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. - - - - - Create a new square sparse identity matrix where each diagonal value is set to One. - - - - - Returns a new matrix containing the lower triangle of this matrix. - - The lower triangle of this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . 
- If the result matrix's dimensions are not the same as this matrix. - - - - Puts the lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. - - The upper triangle of this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the lower triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The lower triangle of this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly lower triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Returns a new matrix containing the upper triangle of this matrix. The new matrix - does not contain the diagonal elements of this matrix. - - The upper triangle of this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - If is . - If the result matrix's dimensions are not the same as this matrix. - - - - Puts the strictly upper triangle of this matrix into the result matrix. - - Where to store the lower triangle. - - - - Negate each element of this matrix and place the results into the result matrix. - - The result of the negation. - - - Calculates the induced infinity norm of this matrix. - The maximum absolute row sum of the matrix. - - - Calculates the entry-wise Frobenius norm of this matrix. - The square root of the sum of the squared values. - - - - Adds another matrix to this matrix. - - The matrix to add to this matrix. - The matrix to store the result of the addition. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Subtracts another matrix from this matrix. - - The matrix to subtract to this matrix. - The matrix to store the result of subtraction. - If the other matrix is . - If the two matrices don't have the same dimensions. - - - - Multiplies each element of the matrix by a scalar and places results into the result matrix. - - The scalar to multiply the matrix with. - The matrix to store the result of the multiplication. - - - - Multiplies this matrix with another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Multiplies this matrix with transpose of another matrix and places the results into the result matrix. - - The matrix to multiply with. - The result of the multiplication. - - - - Multiplies the transpose of this matrix with a vector and places the results into the result vector. - - The vector to multiply with. - The result of the multiplication. - - - - Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. - - The matrix to pointwise multiply with this one. - The matrix to store the result of the pointwise multiplication. 
- - - - Pointwise divide this matrix by another matrix and stores the result into the result matrix. - - The matrix to pointwise divide this one by. - The matrix to store the result of the pointwise division. - - - - Evaluates whether this matrix is symmetric. - - - - - Evaluates whether this matrix is Hermitian (conjugate symmetric). - - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - Multiplies two matrices. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to multiply. - The right matrix to multiply. - The result of multiplication. - If or is . - If the dimensions of or don't conform. - - - - Multiplies a Matrix and a Vector. - - The matrix to multiply. - The vector to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Vector and a Matrix. - - The vector to multiply. - The matrix to multiply. - The result of multiplication. - If or is . - - - - Multiplies a Matrix by a constant and returns the result. - - The matrix to multiply. - The constant to multiply the matrix by. - The result of the multiplication. - If is . - - - - A vector with sparse storage, intended for very large vectors where most of the cells are zero. - - The sparse vector is not thread safe. - - - - Gets the number of non zero elements in the vector. - - The number of non zero elements. - - - - Create a new sparse vector straight from an initialized vector storage instance. - The storage is used directly without copying. - Intended for advanced scenarios where you're working directly with - storage for performance or interop reasons. - - - - - Create a new sparse vector with the given length. - All cells of the vector will be initialized to zero. - Zero-length vectors are not supported. - - If length is less than one. - - - - Create a new sparse vector as a copy of the given other vector. - This new vector will be independent from the other vector. - A new memory block will be allocated for storing the vector. - - - - - Create a new sparse vector as a copy of the given enumerable. - This new vector will be independent from the enumerable. - A new memory block will be allocated for storing the vector. 
Further SparseVector factories: a copy of an indexed enumerable (each key given at most once, omitted keys are zero), and Create overloads that initialise every element from a single value or from an init function.

Arithmetic: adding a non-zero scalar to a sparse vector is explicitly warned against, because the result is 100% filled and a dense vector would be more efficient. Vector addition, scalar and vector subtraction, Negate, Conjugate and scalar Multiply all write into a caller-supplied result vector. DotProduct returns the sum of a[i]*b[i]; ConjugateDotProduct returns the sum of conj(a[i])*b[i]. Operator overloads cover +, unary and binary -, multiplication by a complex scalar from either side, the dot product of two vectors, division by a complex scalar and an element-wise modulus by a scalar, each validating null arguments and matching sizes. AbsoluteMinimumIndex() and AbsoluteMaximumIndex() return the index of the element with the smallest or largest magnitude.

Sum(), L1Norm() (Manhattan norm, the sum of absolute values), InfinityNorm() (the maximum absolute value), Norm(p) (the p-norm, (Σ|this[i]|^p)^(1/p)) and PointwiseMultiply into a result vector. Parse accepts strings of the form 'n', 'n;n;..', '(n;n;..)' or '[n;n;...]' where n is a Complex32, optionally with an IFormatProvider; the two TryParse overloads return false and a null result instead of throwing when the conversion fails.

A Complex32 version of the Vector class follows: a constructor, CoerceZero (set all values whose absolute value is below a threshold to zero), Conjugate and Negate into a target vector, and scalar/vector Add and Subtract, scalar Multiply, division by a scalar and division of a scalar by each element, all writing into a result vector.
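A short sketch of the sparse-vector behaviour described above, again assuming the stock MathNet.Numerics.LinearAlgebra API; the sizes and values are made up for illustration.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class SparseVectorBasics
{
    static void Main()
    {
        // A sparse vector: cells that stay zero are not stored explicitly.
        var v = Vector<double>.Build.Sparse(1000000);
        v[3] = 2.5;
        v[999999] = -1.0;

        Console.WriteLine(v.L1Norm());               // 3.5  (sum of absolute values)
        Console.WriteLine(v.InfinityNorm());         // 2.5  (largest absolute value)
        Console.WriteLine(v.AbsoluteMaximumIndex()); // 3

        // Per the removed docs: adding a non-zero scalar fills every cell, so the
        // "sparse" result becomes 100% dense and very inefficient.
        // var filled = v + 1.0;  // legal, but a dense vector would be the better choice
    }
}
```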
Pointwise operations on the vector, each writing into a result vector: PointwiseMultiply, PointwiseDivide, PointwisePower (scalar exponent and exponent-vector variants), the canonical PointwiseModulus (result takes the sign of the divisor), PointwiseRemainder (% semantics, result takes the sign of the dividend), PointwiseExp and PointwiseLog. DotProduct and ConjugateDotProduct as above; canonical Modulus and Remainder against a scalar divisor or dividend. AbsoluteMinimum/AbsoluteMaximum and their index variants, Sum, L1Norm and L2Norm (Euclidean norm, the square root of the sum of squared values).

InfinityNorm, Norm(p) ((Σ|At(i)|^p)^(1/p)), MaximumIndex, MinimumIndex, and Normalize(p), which returns this vector scaled to unit length with respect to the p-norm.

Then the generic linear-algebra type builder (MatrixBuilder) for creating matrices generically; generic builders should not be required in normal user code. It exposes Zero and One for the element type and factory methods: from an existing storage instance, SameAs overloads that pick a kind and dimensions matching one or two sample matrices, dense matrices with values sampled from a given random distribution or the standard distribution, positive-definite dense matrices built from products of two random samples, dense matrices created from storage, by size (zero-initialised, zero-length unsupported), bound directly to a raw column-major array (no copy, changes are shared between array and matrix), filled from a constant or an init function, dense diagonal matrices, and dense identity matrices.
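The norm and pointwise members are easiest to see on a small dense vector. A minimal sketch, assuming the standard MathNet.Numerics.LinearAlgebra vector API; the numbers are chosen so the expected results are easy to check by hand.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class VectorNorms
{
    static void Main()
    {
        var x = Vector<double>.Build.DenseOfArray(new[] { 3.0, -4.0, 12.0 });
        var y = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 0.5 });

        Console.WriteLine(x.L1Norm());        // 19 = |3| + |-4| + |12|
        Console.WriteLine(x.L2Norm());        // 13 = sqrt(9 + 16 + 144)
        Console.WriteLine(x.InfinityNorm());  // 12 = max |x[i]|
        Console.WriteLine(x.Norm(3.0));       // general p-norm

        var unit = x.Normalize(2.0);          // unit length w.r.t. the 2-norm
        var prod = x.PointwiseMultiply(y);    // element-wise product
        Console.WriteLine(x.DotProduct(y));   // 3*1 + (-4)*2 + 12*0.5 = 1
        Console.WriteLine(unit);
        Console.WriteLine(prod);
    }
}
```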
The remaining MatrixBuilder factories follow the same pattern, each producing an independent matrix with freshly allocated storage: DenseOfMatrix, DenseOfArray (two-dimensional array), DenseOfIndexed (each key at most once, omitted keys are zero), DenseOfColumnMajor, DenseOfColumns / DenseOfColumnArrays / DenseOfColumnVectors, DenseOfRowMajor, DenseOfRows / DenseOfRowArrays / DenseOfRowVectors, DenseOfDiagonalVector / DenseOfDiagonalArray, and DenseOfMatrixArray, which stitches a 2D array of existing matrices together (matrices that do not align are placed in the top-left corner of their cell, the remaining fields are left zero).

The sparse counterparts mirror this: Sparse from storage, by row and column count, from a constant or init function, sparse diagonal and sparse identity matrices, SparseOfMatrix, SparseOfArray, SparseOfIndexed, SparseOfRowMajor and SparseOfColumnMajor, SparseOfColumns / SparseOfRows and their array and vector variants, SparseOfDiagonalVector / SparseOfDiagonalArray, and SparseOfMatrixArray. Diagonal matrices get their own factories: from storage, by size, bound directly to a raw diagonal array (no copy, changes are shared), from a constant or init function, diagonal identity matrices, and DiagonalOfDiagonalVector / DiagonalOfDiagonalArray.

A matching generic VectorBuilder follows, again not needed in normal user code: Zero and One, OfStorage, SameAs overloads, dense vectors with values sampled from a given or the standard random distribution, Dense by size, Dense bound directly to an existing array (shared, no copy), Dense from a constant or init function, DenseOfVector / DenseOfArray / DenseOfEnumerable / DenseOfIndexed, and the sparse equivalents Sparse by size, from a constant or init function, and SparseOfVector / SparseOfArray / SparseOfEnumerable / SparseOfIndexed.
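The builders are normally reached through Matrix&lt;T&gt;.Build and Vector&lt;T&gt;.Build. A minimal sketch under that assumption; the concrete matrices and init functions are illustrative only.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class BuilderBasics
{
    static void Main()
    {
        var M = Matrix<double>.Build;   // a MatrixBuilder<double>
        var V = Vector<double>.Build;   // a VectorBuilder<double>

        var identity = M.DenseIdentity(3);
        var fromRows = M.DenseOfArray(new double[,] { { 1, 2 }, { 3, 4 } });
        var sparse   = M.Sparse(1000, 1000);                 // all zeros, nothing stored yet
        var diag     = M.DenseOfDiagonalArray(new[] { 1.0, 2.0, 3.0 });

        var v = V.Dense(4, i => i * i);                      // init function: 0, 1, 4, 9
        var w = V.DenseOfArray(new[] { 1.0, 1.0, 1.0, 1.0 });

        Console.WriteLine(identity);
        Console.WriteLine(fromRows * fromRows);
        Console.WriteLine(diag * V.Dense(3, 1.0));           // scales the all-ones vector
        Console.WriteLine(v.DotProduct(w));                  // 0 + 1 + 4 + 9 = 14
        Console.WriteLine(sparse.RowCount);
    }
}
```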
The MatrixBuilder and VectorBuilder documentation is then repeated, nearly verbatim, for another supported numeric type (the generated XML carries one copy per type): the same SameAs, random and positive-definite factories and the same Dense*, Sparse* and Diagonal* families, with identical remarks about independent copies, raw-array binding, and top-left placement of misaligned sub-matrices.

Cholesky: encapsulates the Cholesky factorization of a symmetric, positive definite matrix A into a lower triangular L with A = L*L'. The factorization is computed at construction time; a matrix that is not symmetric positive definite makes the constructor throw. Supported data types are double, single, Complex and Complex32. Members: Factor (the lower triangular Cholesky matrix), the Determinant and log determinant of the factorized matrix, Factorize(matrix), and Solve overloads for AX = B (matrix right-hand side) and Ax = b (vector right-hand side), each in result-returning and result-parameter form. See the sketch below.

Evd (eigenvalues and eigenvectors of a real matrix): if A is symmetric, then A = V*D*V' where the eigenvalue matrix D is diagonal and the eigenvector matrix V is orthogonal, i.e. V*VT = I.
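A minimal Cholesky sketch, assuming the MathNet.Numerics.LinearAlgebra factorization API (matrix.Cholesky() with Factor, Determinant and Solve); the example matrix is simply a symmetric, diagonally dominant one chosen so the factorization exists.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class CholeskySolve
{
    static void Main()
    {
        // A symmetric positive definite matrix; anything else makes Cholesky() throw.
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4, 1, 0 },
            { 1, 3, 1 },
            { 0, 1, 2 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });

        var chol = a.Cholesky();             // factorization happens at construction
        var x = chol.Solve(b);               // solves A*x = b using L and L'

        Console.WriteLine(chol.Factor);      // lower triangular L with A = L*L'
        Console.WriteLine(chol.Determinant); // det(A)
        Console.WriteLine(a * x - b);        // approximately the zero vector
    }
}
```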
If A is not symmetric, the eigenvalue matrix D is block diagonal, with the real eigenvalues in 1-by-1 blocks and complex eigenvalue pairs lambda ± i*mu in 2-by-2 blocks [lambda, mu; -mu, lambda]. The columns of V represent the eigenvectors in the sense that A*V = V*D, i.e. A.Multiply(V) equals V.Multiply(D); V may be badly conditioned or even singular, so the validity of A = V*D*Inverse(V) depends on V's condition number. Members: IsSymmetric, the absolute value of the determinant, Rank (the number of non-negligible singular values), IsFullRank, the eigenvalues in ascending order, the eigenvectors, the block-diagonal eigenvalue matrix D, and the four Solve overloads for AX = B and Ax = b.

GramSchmidt: the QR decomposition computed at construction time by modified Gram-Schmidt orthogonalization; any real square matrix A may be decomposed as A = QR where Q is an orthogonal m-by-n matrix and R is an n-by-n upper triangular matrix.

ISolver: the common interface for solving AX = B and Ax = b with matrix or vector right-hand sides.

LU: the LU factorization of a matrix A into a lower triangular L and an upper triangular U with A = L*U, computed at construction time. The Math.Net implementation also stores pivot elements for increased numerical stability; they encode a permutation matrix P such that P*A = L*U. Members: L, U, the permutation P, the determinant of the factorized matrix, and the Solve overloads.
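A combined LU and EVD sketch, again assuming the standard factorization API (matrix.LU(), matrix.Evd()); the 2-by-2 example is chosen symmetric so the eigendecomposition takes the A = V*D*V' form described above.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class LuAndEvd
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 2, 1 },
            { 1, 3 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 3.0, 5.0 });

        var lu = a.LU();                    // P*A = L*U with partial pivoting
        Console.WriteLine(lu.Determinant);  // 2*3 - 1*1 = 5
        Console.WriteLine(lu.Solve(b));     // solution of A*x = b

        var evd = a.Evd();                  // A is symmetric, so A = V*D*V'
        Console.WriteLine(evd.EigenValues); // complex vector; imaginary parts ~ 0 here
        Console.WriteLine(evd.D);           // (block) diagonal eigenvalue matrix
        Console.WriteLine(evd.EigenVectors * evd.D * evd.EigenVectors.Transpose());
    }
}
```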
The remaining LU Solve overloads (matrix and vector right-hand sides) follow, plus Inverse(), which returns the inverse of the matrix calculated via the LU decomposition.

QRMethod: the type of QR factorization to perform, Full or Thin.

QR: any real square matrix A may be decomposed as A = QR, where Q is an orthogonal matrix (its columns are orthogonal unit vectors, QTQ = I) and R is an upper (right) triangular matrix. The decomposition is computed at construction time by Householder transformation; a full factorization yields an m-by-m Q and an m-by-n R, a thin factorization an m-by-n Q and an n-by-n R. Members: Q, R, the absolute determinant of the factorized matrix, IsFullRank, and the Solve overloads for AX = B and Ax = b.

Svd (singular value decomposition): for an m-by-n matrix M with real entries there exists a factorization M = U*Σ*VT, where U is an m-by-m unitary matrix, Σ is an m-by-n diagonal matrix with non-negative real entries, and VT is the transpose of an n-by-n unitary matrix V. By convention the diagonal entries Σ(i,i), the singular values of M, are ordered descending, which makes Σ uniquely determined by M (U and V are not). The computation is done at construction time. Members so far: a flag indicating whether U and VT were computed, the singular values S, the left singular vectors U, the transposed right singular vectors VT, and W, the singular values as a diagonal matrix.
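A QR and SVD sketch on a tall matrix, assuming the standard API (matrix.QR(QRMethod.Thin) from MathNet.Numerics.LinearAlgebra.Factorization and matrix.Svd()); the 3-by-2 system is illustrative, solved here in the least-squares sense.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Factorization;  // QRMethod

class QrAndSvd
{
    static void Main()
    {
        // Overdetermined system: 3 equations, 2 unknowns.
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 1, 1 },
            { 1, 2 },
            { 1, 3 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 2.0 });

        var qr = a.QR(QRMethod.Thin);     // thin: Q is 3x2, R is 2x2
        Console.WriteLine(qr.Solve(b));   // least-squares solution of A*x ~ b

        var svd = a.Svd();                       // singular vectors computed by default
        Console.WriteLine(svd.S);                // singular values
        Console.WriteLine(svd.Rank);             // 2
        Console.WriteLine(svd.ConditionNumber);  // max(S) / min(S)
        Console.WriteLine(svd.U * svd.W * svd.VT); // reconstructs A up to rounding
    }
}
```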
- SVD (continued): the effective numerical rank (the number of non-negligible singular values), the two-norm, the condition number max(S)/min(S), the determinant, and Solve overloads for AX = B and Ax = b.
- The abstract Matrix base class (for the double, single and complex element types), starting with the constants One (1.0) and Zero (0.0) and the low-level operations that the storage types implement: negation, complex conjugation, addition and subtraction of scalars and matrices, multiplication by scalars, vectors and matrices (including products against the transpose or conjugate transpose of either operand), division and division-by, canonical modulus and remainder with the scalar on either side, and pointwise multiply, divide, power, modulus, remainder, exponential and logarithm, each writing into a caller-supplied result matrix.
- The corresponding public arithmetic API, generally in two flavours (returning a new matrix, or filling a result matrix and throwing when the dimensions do not match): Add, Subtract, Multiply and Divide with scalar, vector and matrix operands, left-multiplication by a vector, products with the transpose and conjugate transpose of this or the other matrix, raising a square matrix to a positive integer power, negation and conjugation, modulus and remainder, and the pointwise operations (multiply, divide, power with a scalar or matrix exponent, modulus, remainder).
- Helper functions for applying a unary or binary function to a copy of the matrix or in place.
- Pointwise elementary functions: exp, log, abs, acos, asin, atan, atan2 (with another matrix as the 'x' argument and this matrix as 'y'), ceiling, cos, cosh, floor, log10, round, sign, sin, sinh, sqrt, tan, tanh.
- Scalar characteristics and derived matrices: the trace (square matrices only), the rank and nullity (both obtained from the SVD), the condition number, the determinant, orthonormal bases for the null space (kernel) and the column space (range), the inverse, the Moore-Penrose pseudo-inverse, and the Kronecker product (an (m1*m2)-by-(n1*n2) matrix).
- Pointwise minimum, maximum, absolute minimum and absolute maximum against a scalar or against another matrix.
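To make the arithmetic, pointwise and scalar-valued members listed above concrete, a short sketch (same assumptions as before; the values are illustrative):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class MatrixOpsSketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new double[,] { { 1, 2 }, { 3, 4 } });
        var n = Matrix<double>.Build.DenseOfArray(new double[,] { { 5, 6 }, { 7, 8 } });

        var product  = m * n;                  // matrix product
        var hadamard = m.PointwiseMultiply(n); // element-wise product
        var combined = 2.0 * m + n;            // scalar and matrix operators
        var squared  = m.PointwisePower(2.0);  // element-wise power

        Console.WriteLine(hadamard);
        Console.WriteLine($"trace = {m.Trace()}");
        Console.WriteLine($"det   = {m.Determinant()}");
        Console.WriteLine($"rank  = {m.Rank()}");   // obtained from the SVD
        Console.WriteLine(m.Kronecker(n));          // 4x4 Kronecker product
    }
}
```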
- Norms and sums: the induced L1 norm (maximum absolute column sum), the induced L2 norm (the largest singular value; for sparse matrices currently computed through a dense SVD), the induced infinity norm (maximum absolute row sum), the entry-wise Frobenius norm (square root of the sum of squared values), p-norms and normalization of all row or column vectors (typical p values being 1, 2 and positive infinity), and value / absolute-value sums per row and per column.
- Object plumbing: equality against another matrix or an arbitrary object, hash codes, cloning, and several string helpers (a type/dimensions/shape description and content summaries with a configurable maximum number of cells).
- Storage and element access: the raw data storage object, RowCount and ColumnCount, a range-checked indexer plus unchecked get/set accessors; clearing the whole matrix, single or multiple rows and columns, or a sub-matrix; coercing values below a threshold, or matching a predicate, to zero in place.
- Copying and slicing: copying into another matrix; copying rows and columns (or parts of them) into new or existing vectors; the upper, lower, strictly-upper and strictly-lower triangles; sub-matrices; the diagonal (the Min(Rows, Columns) elements where the row and column index coincide); inserting, removing and setting rows and columns from vectors or arrays; setting sub-matrices and the diagonal; each entry documenting its null and range checks.
- Reshaping and tests: transpose and conjugate transpose (as a new matrix or into a result), row and column permutations, appending two matrices side by side, stacking one on top of the other, diagonally stacking them into a block-diagonal matrix whose off-diagonal blocks are zero, and IsSymmetric / IsHermitian checks.
- Export to arrays: ToArray returns an independent multidimensional array, and ToColumnMajorArray returns the elements laid out column by column in a newly allocated array. For example, the matrix

      1, 2, 3
      4, 5, 6
      7, 8, 9

  is returned as 1, 4, 7, 2, 5, 8, 3, 6, 9.
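A sketch of the slicing and stacking members summarized above (same assumptions; the 3-by-3 matrix mirrors the example from the documentation):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class LayoutSketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 1, 2, 3 },
            { 4, 5, 6 },
            { 7, 8, 9 }
        });

        var row1  = m.Row(1);                // (4, 5, 6)
        var col2  = m.Column(2);             // (3, 6, 9)
        var block = m.SubMatrix(0, 2, 1, 2); // rows 0..1, columns 1..2
        var mT    = m.Transpose();

        var wide = m.Append(mT);        // 3x6: side by side
        var tall = m.Stack(mT);         // 6x3: on top of each other
        var diag = m.DiagonalStack(mT); // 6x6: block diagonal, zero off-diagonal blocks

        Console.WriteLine(row1);
        Console.WriteLine(col2);
        Console.WriteLine(block);
        Console.WriteLine($"wide {wide.RowCount}x{wide.ColumnCount}, tall {tall.RowCount}x{tall.ColumnCount}, diag {diag.RowCount}x{diag.ColumnCount}");
    }
}
```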
- ToRowMajorArray returns the same elements laid out row by row (the example matrix above is returned as 1, 2, 3, 4, 5, 6, 7, 8, 9), and ToRowArrays / ToColumnArrays return jagged arrays of rows or columns; all of these allocate new memory and are independent of the matrix.
- Matching accessors expose the matrix's internal storage directly, and only if the matrix is actually stored in that layout (as a multidimensional array, a column-major or row-major array, or row/column arrays), returning null otherwise; changes to those arrays and the matrix affect each other, so the To* methods should be used whenever an independent copy is needed.
- Enumeration: iterating over all values (including zeros, in unspecified order), over all values together with their row and column index, and over whole columns or rows, optionally restricted to a subset and optionally paired with their index.
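The export and enumeration behaviour described above, as a sketch (same assumptions; the matrix is the one from the documentation's example):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class ExportSketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 1, 2, 3 },
            { 4, 5, 6 },
            { 7, 8, 9 }
        });

        // Independent copies of the data in the two flat layouts.
        Console.WriteLine(string.Join(", ", m.ToColumnMajorArray())); // 1, 4, 7, 2, 5, 8, 3, 6, 9
        Console.WriteLine(string.Join(", ", m.ToRowMajorArray()));    // 1, 2, 3, 4, 5, 6, 7, 8, 9

        // Enumerate all values (order unspecified, zeros included).
        double sum = 0.0;
        foreach (var value in m.Enumerate())
        {
            sum += value;
        }
        Console.WriteLine($"sum of all elements = {sum}"); // 45
    }
}
```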
- - - - - Applies a function to each value of this matrix and replaces the value with its result. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and replaces the value with its result. - The row and column indices of each value (zero-based) are passed as first arguments to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and replaces the value in the result matrix. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and replaces the value in the result matrix. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and replaces the value in the result matrix. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and replaces the value in the result matrix. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and returns the results as a new matrix. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - Applies a function to each value of this matrix and returns the results as a new matrix. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse matrices). - - - - - For each row, applies a function f to each element of the row, threading an accumulator argument through the computation. - Returns an array with the resulting accumulator states for each row. - - - - - For each column, applies a function f to each element of the column, threading an accumulator argument through the computation. - Returns an array with the resulting accumulator states for each column. - - - - - Applies a function f to each row vector, threading an accumulator vector argument through the computation. - Returns the resulting accumulator vector. - - - - - Applies a function f to each column vector, threading an accumulator vector argument through the computation. - Returns the resulting accumulator vector. - - - - - Reduces all row vectors by applying a function between two of them, until only a single vector is left. - - - - - Reduces all column vectors by applying a function between two of them, until only a single vector is left. 
- - - - - Applies a function to each value pair of two matrices and replaces the value in the result vector. - - - - - Applies a function to each value pair of two matrices and returns the results as a new vector. - - - - - Applies a function to update the status with each value pair of two matrices and returns the resulting status. - - - - - Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns a tuple with the index and values of the first element pair of two matrices of the same size satisfying a predicate, or null if none is found. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if at least one element satisfies a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if at least one element pairs of two matrices of the same size satisfies a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if all elements satisfy a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if all element pairs of two matrices of the same size satisfy a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns a Matrix containing the same values of . - - The matrix to get the values from. - A matrix containing a the same values as . - If is . - - - - Negates each element of the matrix. - - The matrix to negate. - A matrix containing the negated values. - If is . - - - - Adds two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to add. - The right matrix to add. - The result of the addition. - If and don't have the same dimensions. - If or is . - - - - Adds a scalar to each element of the matrix. - - This operator will allocate new memory for the result. It will - choose the representation of the provided matrix. - The left matrix to add. - The scalar value to add. - The result of the addition. - If is . - - - - Adds a scalar to each element of the matrix. - - This operator will allocate new memory for the result. It will - choose the representation of the provided matrix. - The scalar value to add. - The right matrix to add. - The result of the addition. - If is . - - - - Subtracts two matrices together and returns the results. - - This operator will allocate new memory for the result. It will - choose the representation of either or depending on which - is denser. - The left matrix to subtract. - The right matrix to subtract. - The result of the subtraction. - If and don't have the same dimensions. - If or is . - - - - Subtracts a scalar from each element of a matrix. - - This operator will allocate new memory for the result. It will - choose the representation of the provided matrix. - The left matrix to subtract. - The scalar value to subtract. - The result of the subtraction. - If and don't have the same dimensions. - If or is . - - - - Subtracts each element of a matrix from a scalar. - - This operator will allocate new memory for the result. It will - choose the representation of the provided matrix. - The scalar value to subtract. - The right matrix to subtract. - The result of the subtraction. 
* Multiplication and division operators: matrix-by-scalar (in either order), matrix-by-matrix, matrix-by-vector and vector-by-matrix products, scalar-divided-by-matrix and matrix-divided-by-scalar, plus the pointwise remainder (% operator, result signed like the dividend) of a matrix with a scalar, of a scalar with a matrix, and of two equally sized matrices.
* Pointwise elementary functions applied to every entry: sqrt, exp, log, log10, sin, cos, tan, asin, acos, atan, sinh, cosh, tanh, absolute value, floor, ceiling and rounding.
* Matrix decompositions, each returning a decomposition object: Cholesky, LU, QR (with a selectable factorization type, plus a Modified Gram-Schmidt variant), SVD (with or without computing the singular U and VT vectors) and EVD, together with Solve overloads that solve Ax = b or AX = B through a QR factorization.
* Iterative Solve overloads for Ax = b and AX = B that take an iterative solver together with an iterator or a set of stop criteria and, optionally, a preconditioner; the result is either written into a supplied vector/matrix or returned.
* Conversions to single and double precision and to their complex counterparts, and extraction of the real and imaginary parts of complex matrices.
* Enumerations that control storage operations (whether existing data must be cleared or may be assumed zero; whether zero entries may be skipped or must always be visited, which mainly matters for sparse storage), an enumeration describing symmetry (unknown, symmetric, Hermitian, not symmetric), and a stop criterion that uses a cancellation token to end an iterative calculation.
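A sketch of how the documented factorizations and solve overloads are typically consumed; this assumes the member names used by current Math.NET Numerics releases (`Cholesky()`, `QR()`, `Svd()`) and a small symmetric positive definite system so that Cholesky applies:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationDemo
{
    static void Main()
    {
        // Symmetric positive definite coefficient matrix and right-hand side.
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4.0, 1.0 },
            { 1.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

        // Factorize once, then solve Ax = b with the decomposition object.
        var x1 = a.Cholesky().Solve(b);

        // QR and SVD follow the same pattern (QR also handles least-squares systems).
        var x2 = a.QR().Solve(b);
        var svd = a.Svd(true);   // true: also compute the singular U and VT vectors

        Console.WriteLine(x1);
        Console.WriteLine(x2);
        Console.WriteLine($"condition number = {svd.ConditionNumber}");
    }
}
```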
* The remaining stop criteria for iterative solvers: one that delegates the decision to a user-supplied callback, one that watches for divergence (a configurable maximum relative increase of the residual over a minimum number of tracked iterations), one that flags NaN residuals, one that caps the iteration count (with a resettable default maximum), and one that requires the residual to stay below a configurable maximum for a minimum number of iterations. Each reports its status, can be reset to the pre-calculation state and can be cloned; their status-determination method must only be called after the calculation has advanced at least one step.
* The framework interfaces: a common stop-criterion interface, the iterative-solver interface (solve Ax = b given the coefficient matrix, right-hand side, result vector, an iterator and a preconditioner), setup objects that create a solver and its default preconditioner and report relative speed and reliability in [0, 1], and the preconditioner interface (initialized from the coefficient matrix, then asked to approximate solutions of Mx = b). Preconditioners trade setup cost against fewer iterations and are invalidated when the matrix changes afterwards.
* An Iterator that aggregates one stop criterion of each type, reports the overall iteration status, and can be cancelled, reset and cloned; helpers that load the available solver setup objects from an arbitrary assembly or from the Math.NET Numerics assembly itself; and a unit preconditioner that does nothing, for running a solver without preconditioning.
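The stop criteria, iterator and preconditioner combine roughly as follows; the sketch assumes the class names shipped in `MathNet.Numerics.LinearAlgebra.Solvers` and `MathNet.Numerics.LinearAlgebra.Double.Solvers`, with BiCgStab standing in for any iterative solver:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class IterativeSolveDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 5.0, 2.0, 0.0 },
            { 2.0, 5.0, 1.0 },
            { 0.0, 1.0, 5.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });
        var x = Vector<double>.Build.Dense(a.ColumnCount);

        // The iterator aggregates one stop criterion of each type.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10),
            new DivergenceStopCriterion<double>());

        var solver = new BiCgStab();

        // Solve Ax = b into x; the unit preconditioner performs no preconditioning.
        solver.Solve(a, b, x, iterator, new UnitPreconditioner<double>());

        Console.WriteLine(x);
        Console.WriteLine(iterator.Status);
    }
}
```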
* The storage layer behind matrices and vectors: dense and diagonal/sparse implementations report whether the format is dense and whether all fields are freely mutable (off-diagonal entries of a diagonal matrix are not), and expose range-checked indexers alongside unchecked At getters and setters that are explicitly not thread safe, plus equality comparison and hash codes.
* Compressed sparse row (CSR) matrix storage keeps three arrays: the row pointers (element i is the index of the first non-zero of row i, the last entry equals the value count, so row i holds RowPointers[i+1] - RowPointers[i] entries and the array has length RowCount + 1), the column indices of the non-zero values, and the values themselves in row-major order, together with the non-zero count. Internal helpers delete an entry, locate an entry's index and compute how much to grow the arrays, again without thread safety. Sparse vector storage mirrors this with index and value arrays and a non-zero count.
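For the sparse side, a short sketch; the cast to `SparseCompressedRowMatrixStorage<double>` assumes the storage class name used by recent Math.NET Numerics versions and only serves to peek at the CSR arrays described above:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Storage;

class SparseStorageDemo
{
    static void Main()
    {
        // 1000 x 1000 sparse matrix; entries that are never set remain implicit zeros.
        var s = Matrix<double>.Build.Sparse(1000, 1000);
        s[0, 0] = 1.0;
        s[10, 20] = 2.5;
        s[999, 999] = -3.0;

        // Enumerate only the stored entries (zeros may be skipped on sparse storage).
        foreach (var entry in s.EnumerateIndexed(Zeros.AllowSkip))
        {
            Console.WriteLine(entry);   // (row, column, value)
        }

        // Peek at the CSR structure: values, column indices and row pointers.
        var csr = (SparseCompressedRowMatrixStorage<double>)s.Storage;
        Console.WriteLine($"stored values: {csr.ValueCount}, row pointer array length: {csr.RowPointers.Length}");
    }
}
```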
* The generic Vector class (supported element types: double, single and their complex counterparts), with zero and one constants and the arithmetic surface: negation, complex conjugation, addition and subtraction of scalars and other vectors, scalar multiplication, division by a scalar and of a scalar by the vector, dot and conjugate-dot products, the outer product M[i,j] = u[i]*v[j], and both the canonical modulus (result signed like the divisor) and the remainder (% operator, result signed like the dividend) against scalars. Every operation exists both in a form that returns a new vector and in a form that writes into a caller-supplied result vector, with null-argument and size-mismatch exceptions documented.
* Pointwise multiplication, division, powers (with a scalar or a per-element exponent vector), modulus, remainder, exponential and natural logarithm, plus the internal helpers that implement these by copying the vector and applying a mutating unary or binary function to the copy.
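A compact sketch of the vector arithmetic and pointwise operations summarized above, under the same Math.NET Numerics assumptions:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class VectorArithmeticDemo
{
    static void Main()
    {
        var u = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });
        var v = Vector<double>.Build.Dense(new[] { 4.0, 5.0, 6.0 });

        double dot = u.DotProduct(v);             // sum of u[i]*v[i] = 32
        Matrix<double> outer = u.OuterProduct(v); // M[i,j] = u[i]*v[j]

        var product = u.PointwiseMultiply(v);     // element-wise product
        var squares = u.PointwisePower(2.0);      // element-wise square

        // Result-vector overloads write into an existing vector instead of allocating.
        var quotient = Vector<double>.Build.Dense(u.Count);
        u.PointwiseDivide(v, quotient);

        Console.WriteLine($"dot = {dot}");
        Console.WriteLine(outer);
        Console.WriteLine(product + squares - quotient);
    }
}
```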
* Further pointwise functions (abs, acos, asin, atan, a two-vector atan2, ceiling, cos, cosh, floor, log10, round, sign, sin, sinh, sqrt, tan, tanh) and pointwise minimum/maximum, in plain and absolute-value flavours, against either a scalar or another vector.
* Norms and statistics: the L1 (Manhattan), L2 (Euclidean) and infinity norms, the general p-norm, normalization to a unit vector with respect to a p-norm, the (absolute) minimum and maximum values and their indices, and the sum of the elements or of their absolute values.
* Equality, hashing, cloning and enumeration, plus ToString helpers that describe the vector's type and shape or print its contents column by column with configurable entry count, line width, format string and culture.
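The norm and statistics helpers, sketched with the expected results in the comments:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class VectorNormDemo
{
    static void Main()
    {
        var v = Vector<double>.Build.Dense(new[] { 3.0, -4.0, 0.0 });

        Console.WriteLine(v.L1Norm());         // 7 (sum of absolute values)
        Console.WriteLine(v.L2Norm());         // 5 (Euclidean length)
        Console.WriteLine(v.InfinityNorm());   // 4 (largest absolute value)
        Console.WriteLine(v.Norm(3.0));        // general p-norm

        var unit = v.Normalize(2.0);           // unit vector w.r.t. the 2-norm
        Console.WriteLine(unit.L2Norm());      // 1

        Console.WriteLine(v.Minimum());               // -4
        Console.WriteLine(v.AbsoluteMaximumIndex());  // 1 (the -4 entry)
        Console.WriteLine(v.Sum());                   // -1
    }
}
```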
- - - Sets the at the given without range checking.. - The index of the value to get or set. - The value to set. - - - - Resets all values to zero. - - - - - Sets all values of a subvector to zero. - - - - - Set all values whose absolute value is smaller than the threshold to zero, in-place. - - - - - Set all values that meet the predicate to zero, in-place. - - - - - Returns a deep-copy clone of the vector. - - A deep-copy clone of the vector. - - - - Set the values of this vector to the given values. - - The array containing the values to use. - If is . - If is not the same size as this vector. - - - - Copies the values of this vector into the target vector. - - The vector to copy elements into. - If is . - If is not the same size as this vector. - - - - Creates a vector containing specified elements. - - The first element to begin copying from. - The number of elements to copy. - A vector containing a copy of the specified elements. - If is not positive or - greater than or equal to the size of the vector. - If + is greater than or equal to the size of the vector. - - If is not positive. - - - - Copies the values of a given vector into a region in this vector. - - The field to start copying to - The number of fields to copy. Must be positive. - The sub-vector to copy from. - If is - - - - Copies the requested elements from this vector to another. - - The vector to copy the elements to. - The element to start copying from. - The element to start copying to. - The number of elements to copy. - - - - Returns the data contained in the vector as an array. - The returned array will be independent from this vector. - A new memory block will be allocated for the array. - - The vector's data as an array. - - - - Returns the internal array of this vector if, and only if, this vector is stored by such an array internally. - Otherwise returns null. Changes to the returned array and the vector will affect each other. - Use ToArray instead if you always need an independent array. - - - - - Create a matrix based on this vector in column form (one single column). - - - This vector as a column matrix. - - - - - Create a matrix based on this vector in row form (one single row). - - - This vector as a row matrix. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector. - - - The enumerator will include all values, even if they are zero. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector. - - - The enumerator will include all values, even if they are zero. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector and their index. - - - The enumerator returns a Tuple with the first value being the element index - and the second value being the value of the element at that index. - The enumerator will include all values, even if they are zero. - - - - - Returns an IEnumerable that can be used to iterate through all values of the vector and their index. - - - The enumerator returns a Tuple with the first value being the element index - and the second value being the value of the element at that index. - The enumerator will include all values, even if they are zero. - - - - - Applies a function to each value of this vector and replaces the value with its result. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). 
- - - - - Applies a function to each value of this vector and replaces the value with its result. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and replaces the value in the result vector. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and returns the results as a new vector. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value of this vector and returns the results as a new vector. - The index of each value (zero-based) is passed as first argument to the function. - If forceMapZero is not set to true, zero values may or may not be skipped depending - on the actual data storage implementation (relevant mostly for sparse vectors). - - - - - Applies a function to each value pair of two vectors and replaces the value in the result vector. - - - - - Applies a function to each value pair of two vectors and returns the results as a new vector. - - - - - Applies a function to update the status with each value pair of two vectors and returns the resulting status. - - - - - Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns a tuple with the index and values of the first element pair of two vectors of the same size satisfying a predicate, or null if none is found. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if at least one element satisfies a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if at least one element pairs of two vectors of the same size satisfies a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns true if all elements satisfy a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). 
- - - - - Returns true if all element pairs of two vectors of the same size satisfy a predicate. - Zero elements may be skipped on sparse data structures if allowed (default). - - - - - Returns a Vector containing the same values of . - - This method is included for completeness. - The vector to get the values from. - A vector containing the same values as . - If is . - - - - Returns a Vector containing the negated values of . - - The vector to get the values from. - A vector containing the negated values as . - If is . - - - - Adds two Vectors together and returns the results. - - One of the vectors to add. - The other vector to add. - The result of the addition. - If and are not the same size. - If or is . - - - - Adds a scalar to each element of a vector. - - The vector to add to. - The scalar value to add. - The result of the addition. - If is . - - - - Adds a scalar to each element of a vector. - - The scalar value to add. - The vector to add to. - The result of the addition. - If is . - - - - Subtracts two Vectors and returns the results. - - The vector to subtract from. - The vector to subtract. - The result of the subtraction. - If and are not the same size. - If or is . - - - - Subtracts a scalar from each element of a vector. - - The vector to subtract from. - The scalar value to subtract. - The result of the subtraction. - If is . - - - - Subtracts each element of a vector from a scalar. - - The scalar value to subtract from. - The vector to subtract. - The result of the subtraction. - If is . - - - - Multiplies a vector with a scalar. - - The vector to scale. - The scalar value. - The result of the multiplication. - If is . - - - - Multiplies a vector with a scalar. - - The scalar value. - The vector to scale. - The result of the multiplication. - If is . - - - - Computes the dot product between two Vectors. - - The left row vector. - The right column vector. - The dot product between the two vectors. - If and are not the same size. - If or is . - - - - Divides a scalar with a vector. - - The scalar to divide. - The vector. - The result of the division. - If is . - - - - Divides a vector with a scalar. - - The vector to divide. - The scalar value. - The result of the division. - If is . - - - - Pointwise divides two Vectors. - - The vector to divide. - The other vector. - The result of the division. - If and are not the same size. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of each element of the vector of the given divisor. - - The vector whose elements we want to compute the remainder of. - The divisor to use. - If is . - - - - Computes the remainder (% operator), where the result has the sign of the dividend, - of the given dividend of each element of the vector. - - The dividend we want to compute the remainder of. - The vector whose elements we want to use as divisor. - If is . - - - - Computes the pointwise remainder (% operator), where the result has the sign of the dividend, - of each element of two vectors. - - The vector whose elements we want to compute the remainder of. - The divisor to use. - If and are not the same size. - If is . 
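As a hedged illustration of the vector API outlined above, the short C# sketch below builds two dense vectors and exercises the sum, addition, dot-product and pointwise operations. The identifiers (Vector<double>.Build, Sum, PointwiseMultiply) are taken from the publicly documented Math.NET Numerics API rather than from the stripped text, so treat them as assumptions.

```csharp
// Minimal sketch, assuming the MathNet.Numerics.LinearAlgebra API (names not confirmed by the stripped docs above).
using System;
using MathNet.Numerics.LinearAlgebra;

class VectorDemo
{
    static void Main()
    {
        var a = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
        var b = Vector<double>.Build.DenseOfArray(new[] { 4.0, 5.0, 6.0 });

        Console.WriteLine(a.Sum());                 // 6: sum of the elements
        Console.WriteLine(a + b);                   // element-wise addition
        Console.WriteLine(a * b);                   // dot product: 32
        Console.WriteLine(a.PointwiseMultiply(b));  // element-wise product: (4, 10, 18)
    }
}
```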
[Pointwise math functions on vectors: square root, exponential, natural and base-10 logarithm, the trigonometric, inverse-trigonometric and hyperbolic functions, absolute value, floor, ceiling and rounding; conversions between single and double precision and between real and complex vectors, including extraction of real and imaginary parts; and linear least-squares regression for X*β ≈ Y: a selectable direct method, the Cholesky factorization of the normal equations, QR (more numerically stable than the normal equations but slower) and SVD (most robust for ill-conditioned problems, slowest), each for vector or matrix responses and for lists of predictor arrays with an optional intercept; simple line fits y = a + b*x and through-origin fits y = b*x from arrays or point tuples; and weighted linear regression via the normal equations.]
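The regression entries above include a straight-line fit y = a + b*x. The following is a minimal sketch, assuming the conventional Fit.Line entry point of Math.NET Numerics; the class and method names are assumptions, since the stripped text only describes the behaviour.

```csharp
// Minimal sketch, assuming MathNet.Numerics.Fit.Line(double[], double[]) as publicly documented.
using System;
using MathNet.Numerics;

class LineFitDemo
{
    static void Main()
    {
        double[] x = { 1, 2, 3, 4, 5 };
        double[] y = { 2.1, 3.9, 6.2, 8.0, 9.9 };

        var p = Fit.Line(x, y);                    // least-squares fit of y = a + b*x
        Console.WriteLine($"a = {p.Item1:F3}, b = {p.Item2:F3}");
    }
}
```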
[Locally-weighted linear regression; ODE solvers: Adams-Bashforth methods of order 1 to 4 and second/fourth-order Runge-Kutta for scalar equations and for ODE systems, each taking an initial value, start and end time, output array size and the ODE model; optimization: the BFGS quasi-Newton minimizer, its box-constrained BFGS-B variant and the limited-memory L-BFGS variant, with configurable gradient/parameter/function-progress tolerances, iteration limits and a weak-Wolfe line search; Levenberg-Marquardt nonlinear least squares together with its objective state (observations, weights, fitted values, parameters, residual sum of squares, gradient G = J'(y - f(x; p)), approximate Hessian H = J'J, function and Jacobian call counts, degrees of freedom and the initial damping scale); the Nelder-Mead simplex method (the gradient-free, fminsearch-style algorithm) with its internal operations (vertex evaluation, convergence test, error profile, initial simplex construction, trial scaling of the highest point, contraction around the lowest point and centroid computation); fit-result properties (best-fit parameters, standard errors, fitted values, covariance and correlation matrices); stopping thresholds, iteration limits, parameter bounds and scale factors; and objective-function wrappers with or without gradient and Hessian, in greedy or lazy evaluation.]
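To make the ODE-solver entries above concrete, here is a hedged sketch of a fourth-order Runge-Kutta call. The class name RungeKutta.FourthOrder and the (y0, start, end, N, f) parameter order are assumed from the public Math.NET Numerics API; the stripped text only lists the parameters descriptively.

```csharp
// Minimal sketch, assuming MathNet.Numerics.OdeSolvers.RungeKutta.FourthOrder(y0, start, end, N, f).
using System;
using MathNet.Numerics.OdeSolvers;

class OdeDemo
{
    static void Main()
    {
        // dy/dt = -2*y with y(0) = 1; exact solution y(t) = exp(-2*t)
        Func<double, double, double> model = (t, y) => -2.0 * y;

        double[] y = RungeKutta.FourthOrder(1.0, 0.0, 2.0, 100, model);
        Console.WriteLine($"y(2) ~ {y[y.Length - 1]:F5} (exact {Math.Exp(-4.0):F5})");
    }
}
```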
[Nonlinear least-squares objective models and functions with either a user-supplied Jacobian or a numerical one (forward-difference, with selectable accuracy order), plus an adapter that adds a finite-difference gradient to a value-only objective; accessors for the independent values, observations, weights, fixed/free parameters, fitted values, residual sum of squares, gradient, J'WJ Hessian and call counters; trust-region solvers (dogleg and Newton-Conjugate-Gradient) with their subproblem, radius threshold and stopping criteria; a Permutation class over a subset of the natural numbers (construction from an index array or from an inversion sequence, dimension, forward mapping, inverse, conversion back to inversions and a validity check); and a single-variable Polynomial with real coefficients stored in ascending order of exponent: degree, constructors for the zero, constant and general case, least-squares polynomial fitting, evaluation at real and complex points (single points and sequences), complex root finding via an eigenvalue decomposition of a companion-like matrix, arithmetic (addition, subtraction, negation, convolution-style multiplication, scalar scaling and division, Euclidean long division returning quotient and remainder, pointwise multiplication and division), ascending/descending string formatting and cloning. The stripped text then opens the floating-point precision utilities.]
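The polynomial entries above describe coefficients stored in ascending order of exponent, for example {5, 0, 2} for p(x) = 5 + 2*x^2, and point evaluation. A hedged sketch follows, assuming the Polynomial type, Degree property and Evaluate method of Math.NET Numerics; the stripped text confirms the behaviour, not the identifiers.

```csharp
// Minimal sketch, assuming the MathNet.Numerics.Polynomial type; {5, 0, 2} encodes p(x) = 5 + 0*x + 2*x^2.
using System;
using MathNet.Numerics;

class PolynomialDemo
{
    static void Main()
    {
        var p = new Polynomial(new[] { 5.0, 0.0, 2.0 });

        Console.WriteLine(p.Degree);        // 2: the largest monomial exponent
        Console.WriteLine(p.Evaluate(3.0)); // 5 + 2*9 = 23
        Console.WriteLine(p);               // formatted polynomial string
    }
}
```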
[Floating-point Precision utilities (introduced with links to Goldberg's "What every computer scientist should know about floating-point arithmetic" and to the machine-epsilon definition): three-way comparison of doubles under an absolute tolerance, a number of decimal places, a relative tolerance or a maximum distance in units in the last place (ULPs); IsLarger/IsSmaller-style comparisons under the same tolerance models; a finiteness check; the mantissa widths of single and double precision; the Demmel-style and Higham-style machine-epsilon constants for both precisions, the corresponding significant-decimal-place counts and the 10 * 2^(-53) and 10 * 2^(-24) constants; magnitude and scaled-value helpers; "directional" integer reinterpretations of floats and doubles; incrementing and decrementing a value to the next representable number; coercing near-zero values to zero by ULP count or absolute threshold; the range of floating-point numbers and of ULPs matching a value within a tolerance; the count of representable numbers between two doubles; the relative epsilon of a value and the measured machine epsilon; and the AlmostEqual family for doubles and complex numbers, using an absolute error, a norm-based error, a default tolerance of 10 * 2^(-52), or a number of decimal places (absolute near zero, relative otherwise).]
- - - - The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by - two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between - 0.005 and 0.015, but not 0.02 and not 0.00 - - - The norm of the first value (can be negative). - The norm of the second value (can be negative). - The norm of the difference of the two values (can be negative). - The number of decimal places. - Thrown if is smaller than zero. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers - are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - - - The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by - two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between - 0.005 and 0.015, but not 0.02 and not 0.00 - - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the - number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the - number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the - number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the - number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers - are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers - are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers - are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers - are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Compares two doubles and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. 
- - - - Determines the 'number' of floating point numbers between two values (i.e. the number of discrete steps - between the two numbers) and then checks if that is within the specified tolerance. So if a tolerance - of 1 is passed then the result will be true only if the two numbers have the same binary representation - OR if they are two adjacent numbers that only differ by one step. - - - The comparison method used is explained in http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm . The article - at http://www.extremeoptimization.com/resources/Articles/FPDotNetConceptsAndFormats.aspx explains how to transform the C code to - .NET enabled code without using pointers and unsafe code. - - - The first value. - The second value. - The maximum number of floating point values between the two values. Must be 1 or larger. - Thrown if is smaller than one. - - - - Compares two floats and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. - - The first value. - The second value. - The maximum number of floating point values between the two values. Must be 1 or larger. - Thrown if is smaller than one. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. 
- - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The number of decimal places. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two lists of doubles and determines if they are equal within the - specified maximum error. - - The first value list. - The second value list. - The accuracy required for being almost equal. - - - - Compares two vectors and determines if they are equal within the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two vectors and determines if they are equal within the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two vectors and determines if they are equal to within the specified number - of decimal places or not, using the number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two vectors and determines if they are equal to within the specified number of decimal places or not. - If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Compares two matrices and determines if they are equal within the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two matrices and determines if they are equal within the specified maximum error. - - The first value. - The second value. - The accuracy required for being almost equal. - - - - Compares two matrices and determines if they are equal to within the specified number - of decimal places or not, using the number of decimal places as an absolute measure. - - The first value. - The second value. - The number of decimal places. - - - - Compares two matrices and determines if they are equal to within the specified number of decimal places or not. - If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. - - The first value. - The second value. - The number of decimal places. - - - - Support Interface for Precision Operations (like AlmostEquals). - - Type of the implementing class. - - - - Returns a Norm of a value of this type, which is appropriate for measuring how - close this value is to zero. - - A norm of this value. - - - - Returns a Norm of the difference of two values of this type, which is - appropriate for measuring how close together these two values are. 
- - The value to compare with. - A norm of the difference between this and the other value. - - - Revision - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - This method is safe to call, even if the provider is not loaded. - - - - - P/Invoke methods to the native math libraries. - - - - - Name of the native DLL. - - - - Revision - - - Revision - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - This method is safe to call, even if the provider is not loaded. - - - - - Frees the memory allocated to the MKL memory pool. - - - - - Frees the memory allocated to the MKL memory pool on the current thread. - - - - - Disable the MKL memory pool. May impact performance. - - - - - Retrieves information about the MKL memory pool. - - On output, returns the number of memory buffers allocated. - Returns the number of bytes allocated to all memory buffers. - - - - Enable gathering of peak memory statistics of the MKL memory pool. - - - - - Disable gathering of peak memory statistics of the MKL memory pool. - - - - - Measures peak memory usage of the MKL memory pool. - - Whether the usage counter should be reset. - The peak number of bytes allocated to all memory buffers. - - - - Disable gathering memory usage - - - - - Enable gathering memory usage - - - - - Return peak memory usage - - - - - Return peak memory usage and reset counter - - - - - Consistency vs. performance trade-off between runs on different machines. - - - - Consistent on the same CPU only (maximum performance) - - - Consistent on Intel and compatible CPUs with SSE2 support (maximum compatibility) - - - Consistent on Intel CPUs supporting SSE2 or later - - - Consistent on Intel CPUs supporting SSE4.2 or later - - - Consistent on Intel CPUs supporting AVX or later - - - Consistent on Intel CPUs supporting AVX2 or later - - - - P/Invoke methods to the native math libraries. - - - - - Name of the native DLL. - - - - - Helper class to load native libraries depending on the architecture of the OS and process. - - - - - Dictionary of handles to previously loaded libraries, - - - - - Gets a string indicating the architecture and bitness of the current process. - - - - - If the last native library failed to load then gets the corresponding exception - which occurred or null if the library was successfully loaded. - - - - - Load the native library with the given filename. - - The file name of the library to load. - Hint path where to look for the native binaries. Can be null. - True if the library was successfully loaded or if it has already been loaded. - - - - Try to load a native library by providing its name and a directory. - Tries to load an implementation suitable for the current CPU architecture - and process mode if there is a matching subfolder. - - True if the library was successfully loaded or if it has already been loaded. - - - - Try to load a native library by providing the full path including the file name of the library. - - True if the library was successfully loaded or if it has already been loaded. - - - Revision - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - This method is safe to call, even if the provider is not loaded. - - - - - P/Invoke methods to the native math libraries. - - - - - Name of the native DLL. 
- - - - - Gets or sets the Fourier transform provider. Consider to use UseNativeMKL or UseManaged instead. - - The linear algebra provider. - - - - Optional path to try to load native provider binaries from. - If not set, Numerics will fall back to the environment variable - `MathNetNumericsFFTProviderPath` or the default probing paths. - - - - - Try to use a native provider, if available. - - - - - Use the best provider available. - - - - - Use a specific provider if configured, e.g. using the - "MathNetNumericsFFTProvider" environment variable, - or fall back to the best provider. - - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - Sequences with length greater than Math.Sqrt(Int32.MaxValue) + 1 - will cause k*k in the Bluestein sequence to overflow (GH-286). - - - - - Generate the bluestein sequence for the provided problem size. - - Number of samples. - Bluestein sequence exp(I*Pi*k^2/N) - - - - Generate the bluestein sequence for the provided problem size. - - Number of samples. - Bluestein sequence exp(I*Pi*k^2/N) - - - - Convolution with the bluestein sequence (Parallel Version). - - Sample Vector. - - - - Convolution with the bluestein sequence (Parallel Version). - - Sample Vector. - - - - Swap the real and imaginary parts of each sample. - - Sample Vector. - - - - Swap the real and imaginary parts of each sample. - - Sample Vector. - - - - Bluestein generic FFT for arbitrary sized sample vectors. - - - - - Bluestein generic FFT for arbitrary sized sample vectors. - - - - - Bluestein generic FFT for arbitrary sized sample vectors. - - - - - Bluestein generic FFT for arbitrary sized sample vectors. - - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - Fully rescale the FFT result. - - Sample Vector. - - - - Fully rescale the FFT result. - - Sample Vector. - - - - Half rescale the FFT result (e.g. for symmetric transforms). - - Sample Vector. - - - - Fully rescale the FFT result (e.g. for symmetric transforms). - - Sample Vector. - - - - Radix-2 Reorder Helper Method - - Sample type - Sample vector - - - - Radix-2 Step Helper Method - - Sample vector. - Fourier series exponent sign. - Level Group Size. - Index inside of the level. - - - - Radix-2 Step Helper Method - - Sample vector. - Fourier series exponent sign. - Level Group Size. - Index inside of the level. - - - - Radix-2 generic FFT for power-of-two sized sample vectors. - - - - - Radix-2 generic FFT for power-of-two sized sample vectors. - - - - - Radix-2 generic FFT for power-of-two sized sample vectors. - - - - - Radix-2 generic FFT for power-of-two sized sample vectors. - - - - - Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). 
- - - - - Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). - - - - - Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). - - - - - Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). - - - - Hint path where to look for the native binaries - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - NVidia's CUDA Toolkit linear algebra provider. - - - NVidia's CUDA Toolkit linear algebra provider. - - - NVidia's CUDA Toolkit linear algebra provider. - - - NVidia's CUDA Toolkit linear algebra provider. - - - NVidia's CUDA Toolkit linear algebra provider. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . 
- On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. 
The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - Hint path where to look for the native binaries - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. - If calling this method fails, consider to fall back to alternatives like the managed provider. - - - - - Frees memory buffers, caches and handles allocated in or to the provider. 
- Does not unload the provider itself, it is still usable afterwards. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. 
- - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0f and beta set to 0.0f, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . 
- On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - How to transpose a matrix. - - - - - Don't transpose a matrix. - - - - - Transpose a matrix. - - - - - Conjugate transpose a complex matrix. - - If a conjugate transpose is used with a real matrix, then the matrix is just transposed. - - - - Types of matrix norms. - - - - - The 1-norm. - - - - - The Frobenius norm. - - - - - The infinity norm. - - - - - The largest absolute value norm. - - - - - Interface to linear algebra algorithms that work off 1-D arrays. - - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - Interface to linear algebra algorithms that work off 1-D arrays. - - Supported data types are Double, Single, Complex, and Complex32. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. 
- - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiply elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. 
- The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the full QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by QR factor. This is only used for the managed provider and can be - null for the native provider. The native provider uses the Q portion stored in the R matrix. - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - On entry the B matrix; on exit the X matrix. 
[This hunk removes the bundled MathNet.Numerics XML documentation (the linear algebra provider reference). The XML markup did not survive extraction and the same doc comments repeat verbatim once per numeric type and per managed/native provider; the recoverable content covered:
* BLAS-like kernels: AXPY (result = y + alpha*x), scaling (SCAL), conjugation, dot product (DOT), and point-wise add / subtract / multiply / divide / power of arrays
* matrix norms, simplified matrix multiplication (GEMM with alpha = 1.0 and beta = 0.0, no transposition), the general update c = alpha*op(a)*op(b) + beta*c, and a cache-oblivious multiplication
* factorizations and solvers: LUP (GETRF/GETRI/GETRS), Cholesky (POTRF/POTRS), QR (GEQRF/ORGQR) with QR solve, SVD (GESVD) with SVD solve, Givens rotations (DROTG), and eigenvalue/eigenvector decomposition via EISPACK-derived tridiagonalization, QL, Hessenberg and Schur reductions
* provider management: probing for a native provider (via the MathNetNumericsLAProvider and MathNetNumericsLAProviderPath environment variables), falling back to the managed provider, and freeing provider resources]
- There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. 
- - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . 
- The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Solves A*X=B for X using a previously SVD decomposed matrix. - - The number of rows in the A matrix. - The number of columns in the A matrix. - The s values returned by . - The left singular vectors returned by . - The right singular vectors returned by . - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Symmetric Householder reduction to tridiagonal form. - - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tred2 by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - Symmetric tridiagonal QL algorithm. - - Data array of matrix V (eigenvectors) - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedures tql2, by - Bowdler, Martin, Reinsch, and Wilkinson, Handbook for - Auto. Comp., Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Nonsymmetric reduction to Hessenberg form. 
- - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Order of initial matrix - This is derived from the Algol procedures orthes and ortran, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutines in EISPACK. - - - - Nonsymmetric reduction from Hessenberg to real Schur form. - - Data array of matrix V (eigenvectors) - Array for internal storage of nonsymmetric Hessenberg form. - Arrays for internal storage of real parts of eigenvalues - Arrays for internal storage of imaginary parts of eigenvalues - Order of initial matrix - This is derived from the Algol procedure hqr2, - by Martin and Wilkinson, Handbook for Auto. Comp., - Vol.ii-Linear Algebra, and the corresponding - Fortran subroutine in EISPACK. - - - - - Complex scalar division X/Y. - - Real part of X - Imaginary part of X - Real part of Y - Imaginary part of Y - Division result as a number. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Conjugates an array. Can be used to conjugate a vector and a matrix. - - The values to conjugate. - This result of the conjugation. - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows. - The number of columns. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Multiples two matrices. 
result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Calculate Cholesky step - - Factor matrix - Number of rows - Column start - Total columns - Multipliers calculated previously - Number of available processors - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. Has to be different than . - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The column to solve for. 
- - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Perform calculation of Q or R - - Work array - Index of column in work array - Q or R matrices - The first row in - The last row - The first column - The last column - Number of available CPUs - - - - Generate column from initial matrix to work array - - Work array - Initial matrix - The number of rows in matrix - The first row - Column index - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s - associated with the Givens rotation that zeros the y-coordinate of the point. - - Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation - Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation - Contains the parameter c associated with the Givens rotation - Contains the parameter s associated with the Givens rotation - This is equivalent to the DROTG LAPACK routine. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. 
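For readers who do not know the BLAS naming used above: the AXPY routine mentioned in the deleted comments simply computes result = y + alpha*x element by element. Below is a minimal stand-alone sketch of that semantics, written for this note and not taken from the deleted file.

```csharp
using System;

// Minimal sketch (not from the deleted file): the semantics of the documented
// AXPY-style routine, result = y + alpha * x, applied element by element.
class AxpySketch
{
    static double[] AddVectorToScaledVector(double[] y, double alpha, double[] x)
    {
        var result = new double[y.Length];
        for (int i = 0; i < y.Length; i++)
        {
            result[i] = y[i] + alpha * x[i];   // result = y + alpha*x
        }
        return result;
    }

    static void Main()
    {
        var r = AddVectorToScaledVector(new[] { 1.0, 2.0 }, 3.0, new[] { 10.0, 20.0 });
        Console.WriteLine(string.Join(", ", r));   // prints: 31, 62
    }
}
```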
[Deleted file, continued: the remainder of the managed-provider comments (including the complex scalar division helper), followed by the documentation of the Intel Math Kernel Library (MKL) linear algebra provider. The MKL blocks repeat the same BLAS/LAPACK-equivalent routine descriptions per numeric type and add MKL-specific settings: a hint path for locating the native binaries, and a bit-consistency versus performance trade-off expressed through VML precision, rounding and accuracy modes, plus provider availability, initialization and resource-release hooks.]
- - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Does a point wise add of two arrays z = x + y. This can be used - to add vectors or matrices. - - The array x. - The array y. - The result of the addition. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise subtraction of two arrays z = x - y. This can be used - to subtract vectors or matrices. - - The array x. - The array y. - The result of the subtraction. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise multiplication of two arrays z = x * y. This can be used - to multiple elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise multiplication. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise division of two arrays z = x / y. This can be used - to divide elements of vectors or matrices. - - The array x. - The array y. - The result of the point wise division. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Does a point wise power of two arrays z = x ^ y. This can be used - to raise elements of vectors or matrices to the powers of another vector or matrix. - - The array x. - The array y. - The result of the point wise power. - There is no equivalent BLAS routine, but many libraries - provide optimized (parallel and/or vectorized) versions of this - routine. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Error codes return from the MKL provider. - - - - - Unable to allocate memory. - - - - - OpenBLAS linear algebra provider. - - - OpenBLAS linear algebra provider. 
- - - OpenBLAS linear algebra provider. - - - OpenBLAS linear algebra provider. - - - OpenBLAS linear algebra provider. - - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. 
- - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . 
- On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. 
- - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. 
The length of the array must be order * order. - - - Hint path where to look for the native binaries - - - - Try to find out whether the provider is available, at least in principle. - Verification may still fail if available, but it will certainly fail if unavailable. - - - - - Initialize and verify that the provided is indeed available. - If not, fall back to alternatives like the managed provider - - - - - Frees memory buffers, caches and handles allocated in or to the provider. - Does not unload the provider itself, it is still usable afterwards. - - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0 and beta set to 0.0, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . 
- The pivot indices of . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. 
- - - - Computes the eigenvalues and eigenvectors of a matrix. - - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Computes the requested of the matrix. - - The type of norm to compute. - The number of rows in the matrix. - The number of columns in the matrix. - The matrix to compute the norm from. - - The requested of the matrix. - - - - - Computes the dot product of x and y. - - The vector x. - The vector y. - The dot product of x and y. - This is equivalent to the DOT BLAS routine. - - - - Adds a scaled vector to another: result = y + alpha*x. - - The vector to update. - The value to scale by. - The vector to add to . - The result of the addition. - This is similar to the AXPY BLAS routine. - - - - Scales an array. Can be used to scale a vector and a matrix. - - The scalar. - The values to scale. - This result of the scaling. - This is similar to the SCAL BLAS routine. - - - - Multiples two matrices. result = x * y - - The x matrix. - The number of rows in the x matrix. - The number of columns in the x matrix. - The y matrix. - The number of rows in the y matrix. - The number of columns in the y matrix. - Where to store the result of the multiplication. - This is a simplified version of the BLAS GEMM routine with alpha - set to 1.0f and beta set to 0.0f, and x and y are not transposed. - - - - Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c - - How to transpose the matrix. - How to transpose the matrix. - The value to scale matrix. - The a matrix. - The number of rows in the matrix. - The number of columns in the matrix. - The b matrix - The number of rows in the matrix. - The number of columns in the matrix. - The value to scale the matrix. - The c matrix. - - - - Computes the LUP factorization of A. P*A = L*U. - - An by matrix. The matrix is overwritten with the - the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f - for the L factor). The upper triangular factor U is stored on and above the diagonal of . - The order of the square matrix . - On exit, it contains the pivot indices. The size of the array must be . - This is equivalent to the GETRF LAPACK routine. - - - - Computes the inverse of matrix using LU factorization. - - The N by N matrix to invert. Contains the inverse On exit. - The order of the square matrix . - This is equivalent to the GETRF and GETRI LAPACK routines. - - - - Computes the inverse of a previously factored matrix. - - The LU factored N by N matrix. Contains the inverse On exit. - The order of the square matrix . - The pivot indices of . - This is equivalent to the GETRI LAPACK routine. - - - - Solves A*X=B for X using LU factorization. - - The number of columns of B. - The square matrix A. - The order of the square matrix . - On entry the B matrix; on exit the X matrix. - This is equivalent to the GETRF and GETRS LAPACK routines. - - - - Solves A*X=B for X using a previously factored A matrix. - - The number of columns of B. - The factored A matrix. - The order of the square matrix . - The pivot indices of . - On entry the B matrix; on exit the X matrix. 
- This is equivalent to the GETRS LAPACK routine. - - - - Computes the Cholesky factorization of A. - - On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the - the Cholesky factorization. - The number of rows or columns in the matrix. - This is equivalent to the POTRF LAPACK routine. - - - - Solves A*X=B for X using Cholesky factorization. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRF add POTRS LAPACK routines. - - - - - Solves A*X=B for X using a previously factored A matrix. - - The square, positive definite matrix A. - The number of rows and columns in A. - On entry the B matrix; on exit the X matrix. - The number of columns in the B matrix. - This is equivalent to the POTRS LAPACK routine. - - - - Computes the QR factorization of A. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the R matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A M by M matrix that holds the Q matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Computes the thin QR factorization of A where M > N. - - On entry, it is the M by N A matrix to factor. On exit, - it is overwritten with the Q matrix of the QR factorization. - The number of rows in the A matrix. - The number of columns in the A matrix. - On exit, A N by N matrix that holds the R matrix of the - QR factorization. - A min(m,n) vector. On exit, contains additional information - to be used by the QR solve routine. - This is similar to the GEQRF and ORGQR LAPACK routines. - - - - Solves A*X=B for X using QR factorization of A. - - The A matrix. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using a previously QR factored matrix. - - The Q matrix obtained by calling . - The R matrix obtained by calling . - The number of rows in the A matrix. - The number of columns in the A matrix. - Contains additional information on Q. Only used for the native solver - and can be null for the managed provider. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - The type of QR factorization to perform. - Rows must be greater or equal to columns. - - - - Solves A*X=B for X using the singular value decomposition of A. - - On entry, the M by N matrix to decompose. - The number of rows in the A matrix. - The number of columns in the A matrix. - The B matrix. - The number of columns of B. - On exit, the solution matrix. - - - - Computes the singular value decomposition of A. - - Compute the singular U and VT vectors or not. - On entry, the M by N matrix to decompose. On exit, A may be overwritten. - The number of rows in the A matrix. - The number of columns in the A matrix. - The singular values of A in ascending value. - If is true, on exit U contains the left - singular vectors. - If is true, on exit VT contains the transposed - right singular vectors. - This is equivalent to the GESVD LAPACK routine. - - - - Computes the eigenvalues and eigenvectors of a matrix. 
- - Whether the matrix is symmetric or not. - The order of the matrix. - The matrix to decompose. The length of the array must be order * order. - On output, the matrix contains the eigen vectors. The length of the array must be order * order. - On output, the eigen values (λ) of matrix in ascending value. The length of the array must . - On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. - - - - Error codes return from the native OpenBLAS provider. - - - - - Unable to allocate memory. - - - - - A random number generator based on the class in the .NET library. - - - - - Construct a new random number generator with a random seed. - - Uses and uses the value of - to set whether the instance is thread safe. - - - - Construct a new random number generator with random seed. - - The to use. - Uses the value of to set whether the instance is thread safe. - - - - Construct a new random number generator with random seed. - - Uses - if set to true , the class is thread safe. - - - - Construct a new random number generator with random seed. - - The to use. - if set to true , the class is thread safe. - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Multiplicative congruential generator using a modulus of 2^31-1 and a multiplier of 1132489760. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Multiplicative congruential generator using a modulus of 2^59 and a multiplier of 13^13. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. 
- - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - The seed is set to 1, if the zero is used as the seed. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Random number generator using Mersenne Twister 19937 algorithm. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Mersenne twister constant. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - Uses the value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. - - - - Default instance, thread-safe. - - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - A 32-bit combined multiple recursive generator with 2 components of order 3. - - Based off of P. L'Ecuyer, "Combined Multiple Recursive Random Number Generators," Operations Research, 44, 5 (1996), 816--822. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. 
- - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Represents a Parallel Additive Lagged Fibonacci pseudo-random number generator. - - - The type bases upon the implementation in the - Boost Random Number Library. - It uses the modulus 232 and by default the "lags" 418 and 1279. Some popular pairs are presented on - Wikipedia - Lagged Fibonacci generator. - - - - - Default value for the ShortLag - - - - - Default value for the LongLag - - - - - The multiplier to compute a double-precision floating point number [0, 1) - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - if set to true, the class is thread safe. - The ShortLag value - TheLongLag value - - - - Gets the short lag of the Lagged Fibonacci pseudo-random number generator. - - - - - Gets the long lag of the Lagged Fibonacci pseudo-random number generator. - - - - - Stores an array of random numbers - - - - - Stores an index for the random number array element that will be accessed next. - - - - - Fills the array with new unsigned random numbers. - - - Generated random numbers are 32-bit unsigned integers greater than or equal to 0 - and less than or equal to . - - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - This class implements extension methods for the System.Random class. 
The extension methods generate - pseudo-random distributed numbers for types other than double and int32. - - - - - Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The random number generator. - The array to fill with random values. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The random number generator. - The size of the array to fill. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an array of uniform random bytes. - - The random number generator. - The size of the array to fill. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Fills an array with uniform random 32-bit signed integers greater than or equal to zero and less than . - - The random number generator. - The array to fill with random values. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Fills an array with uniform random 32-bit signed integers within the specified range. - - The random number generator. - The array to fill with random values. - Lower bound, inclusive. - Upper bound, exclusive. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a nonnegative random number less than . - - The random number generator. - - A 64-bit signed integer greater than or equal to 0, and less than ; that is, - the range of return values includes 0 but not . - - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a random number of the full Int32 range. - - The random number generator. - - A 32-bit signed integer of the full range, including 0, negative numbers, - and . - - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a random number of the full Int64 range. - - The random number generator. - - A 64-bit signed integer of the full range, including 0, negative numbers, - and . - - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a nonnegative decimal floating point random number less than 1.0. - - The random number generator. 
- - A decimal floating point number greater than or equal to 0.0, and less than 1.0; that is, - the range of return values includes 0.0 but not 1.0. - - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Returns a random boolean. - - The random number generator. - - This extension is thread-safe if and only if called on an random number - generator provided by Math.NET Numerics or derived from the RandomSource class. - - - - - Provides a time-dependent seed value, matching the default behavior of System.Random. - WARNING: There is no randomness in this seed and quick repeated calls can cause - the same seed value. Do not use for cryptography! - - - - - Provides a seed based on time and unique GUIDs. - WARNING: There is only low randomness in this seed, but at least quick repeated - calls will result in different seed values. Do not use for cryptography! - - - - - Provides a seed based on an internal random number generator (crypto if available), time and unique GUIDs. - WARNING: There is only medium randomness in this seed, but quick repeated - calls will result in different seed values. Do not use for cryptography! - - - - - Base class for random number generators. This class introduces a layer between - and the Math.Net Numerics random number generators to provide thread safety. - When used directly it use the System.Random as random number source. - - - - - Initializes a new instance of the class using - the value of to set whether - the instance is thread safe or not. - - - - - Initializes a new instance of the class. - - if set to true , the class is thread safe. - Thread safe instances are two and half times slower than non-thread - safe classes. - - - - Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The array to fill with random values. - - - - Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - The size of the array to fill. - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than . - - - - - Returns a random number less then a specified maximum. - - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. - A 32-bit signed integer less than . - is zero or negative. - - - - Returns a random number within a specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - A 32-bit signed integer greater than or equal to and less than ; that is, the range of return values includes but not . If equals , is returned. - - is greater than . - - - - Fills an array with random 32-bit signed integers greater than or equal to zero and less than . - - The array to fill with random values. - - - - Returns an array with random 32-bit signed integers greater than or equal to zero and less than . - - The size of the array to fill. - - - - Fills an array with random numbers within a specified range. - - The array to fill with random values. - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. - - - - Returns an array with random 32-bit signed integers within the specified range. - - The size of the array to fill. - The exclusive upper bound of the random number returned. 
Range: maxExclusive ≥ 1. - - - - Fills an array with random numbers within a specified range. - - The array to fill with random values. - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Returns an array with random 32-bit signed integers within the specified range. - - The size of the array to fill. - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Returns an infinite sequence of random 32-bit signed integers greater than or equal to zero and less than . - - - - - Returns an infinite sequence of random numbers within a specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. - - - - Fills the elements of a specified array of bytes with random numbers. - - An array of bytes to contain random numbers. - is null. - - - - Returns a random number between 0.0 and 1.0. - - A double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than 2147483647 (). - - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 32 (not verified). - - - - - Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. - N (bit count) is expected to be greater than zero and less than 64 (not verified). - - - - - Returns a random 32-bit signed integer within the specified range. - - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). - - - - Returns a random 32-bit signed integer within the specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). - - - - A random number generator based on the class in the .NET library. - - - - - Construct a new random number generator with a random seed. - - - - - Construct a new random number generator with random seed. - - if set to true , the class is thread safe. - - - - Construct a new random number generator with random seed. - - The seed value. - - - - Construct a new random number generator with random seed. - - The seed value. - if set to true , the class is thread safe. - - - - Default instance, thread-safe. - - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Returns a random 32-bit signed integer greater than or equal to zero and less than - - - - - Returns a random 32-bit signed integer within the specified range. - - The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). - - - - Returns a random 32-bit signed integer within the specified range. - - The inclusive lower bound of the random number returned. - The exclusive upper bound of the random number returned. 
Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). - - - - Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). - - - - - Fill an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. - WARNING: potentially very short random sequence length, can generate repeated partial sequences. - - Parallelized on large length, but also supports being called in parallel from multiple threads - - - - Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. - WARNING: potentially very short random sequence length, can generate repeated partial sequences. - - Parallelized on large length, but also supports being called in parallel from multiple threads - - - - Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Wichmann-Hill’s 1982 combined multiplicative congruential generator. - - See: Wichmann, B. A. & Hill, I. D. (1982), "Algorithm AS 183: - An efficient and portable pseudo-random number generator". Applied Statistics 31 (1982) 188-190 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - If the seed value is zero, it is set to one. Uses the - value of to - set whether the instance is thread safe. - - - - Initializes a new instance of the class. - - The seed value. - The seed is set to 1, if the zero is used as the seed. - if set to true , the class is thread safe. - - - - Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. - - - - - Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads. - - - - Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. - - Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. - - - - Wichmann-Hill’s 2006 combined multiplicative congruential generator. - - See: Wichmann, B. A. & Hill, I. D. (2006), "Generating good pseudo-random numbers". - Computational Statistics & Data Analysis 51:3 (2006) 1614-1622 - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - - - - Initializes a new instance of the class using - a seed based on time and unique GUIDs. - - if set to true , the class is thread safe. 
Wichmann-Hill 2006 generator: Wichmann-Hill's 2006 combined multiplicative congruential generator.
See: Wichmann, B. A. & Hill, I. D. (2006), "Generating good pseudo-random numbers", Computational Statistics & Data Analysis 51:3 (2006) 1614-1622.

* Constructors: a seed based on time and unique GUIDs, or an explicit seed value (a zero seed is replaced by one), each optionally thread safe.
* Methods: a random double in [0.0, 1.0), plus the fill/array/infinite-sequence variants for uniform doubles.

Multiply-with-carry Xorshift generator: implements the multiply-with-carry Xorshift pseudo random number generator (RNG) specified in Marsaglia, George (2003), "Xorshift RNGs", with the recurrence Xn = a * Xn-3 + c mod 2^32. http://www.jstatsoft.org/v08/i14/paper

* Internal state: the last three unsigned random numbers (the oldest also serves as the seed), the carry-over value, the multiplier, and a scaling constant used to map an unsigned sample to a double in [0, 1).
* Default parameters: a = 916905990, c = 13579, X1 = 77465321, X2 = 362436069.
* Constructors: a seed based on time and unique GUIDs or an explicit seed value (a zero seed is replaced by one), optionally thread safe, either with the default parameters or with custom values for the multiplier a, the initial carry c and the initial state values of X1 and X2.
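A direct transcription of the stated recurrence, using the default parameters listed above, might look as follows. This is an illustrative sketch of Marsaglia's lag-3 multiply-with-carry scheme, not the library's implementation; in particular, the mapping of X1/X2 onto the individual lags and the seeding details are assumptions.

```csharp
// Sketch of a lag-3 multiply-with-carry generator following the recurrence
// Xn = a * Xn-3 + c mod 2^32 (Marsaglia 2003, "Xorshift RNGs").
// a, c, X1, X2 use the default values quoted above; the remaining state
// word is taken from the seed. Illustration only, not the library code.
public sealed class MultiplyWithCarrySketch
{
    const ulong A = 916905990;   // multiplier a
    ulong _c = 13579;            // carry
    uint _xn3;                   // X(n-3): starts as the seed
    uint _xn2 = 77465321;        // X(n-2): default X1 (assumed mapping)
    uint _xn1 = 362436069;       // X(n-1): default X2 (assumed mapping)

    public MultiplyWithCarrySketch(uint seed) { _xn3 = seed == 0 ? 1u : seed; }

    public uint NextUInt32()
    {
        ulong t = A * _xn3 + _c;     // a * X(n-3) + c
        _c = t >> 32;                // new carry = high 32 bits
        uint x = (uint)t;            // new sample = low 32 bits (mod 2^32)
        _xn3 = _xn2; _xn2 = _xn1; _xn1 = x;
        return x;
    }

    // Uniform double in [0, 1): scale the 32-bit sample by 2^-32.
    public double NextDouble() => NextUInt32() * (1.0 / 4294967296.0);
}
```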
* Further constructors accept an explicit seed value (a zero seed is replaced by one) together with the thread-safety flag, again either with the default parameters or with custom values for a, c, X1 and X2.
* Methods: a random double in [0.0, 1.0), a non-negative random 32-bit integer, filling a byte array over the full range 0..255, and the fill/array/infinite-sequence variants for uniform doubles.

Xoshiro256** generator: the xoshiro256** 1.0 pseudo random number generator, exposed through the same random-source interface.
Quoting its authors: this is the all-purpose, rock-solid generator of the family, with excellent (sub-ns) speed and a 256-bit state space that is large enough for any parallel application, and it passes all statistical tests the authors are aware of. For generating only floating-point numbers, xoshiro256+ is even faster. The state must be seeded so that it is not everywhere zero; given a 64-bit seed, the authors suggest seeding a splitmix64 generator and using its output to fill the state.
For further details see: David Blackman & Sebastiano Vigna (2018), "Scrambled Linear Pseudorandom Number Generators", https://arxiv.org/abs/1805.01407

* Constructors: a random seed or an explicit seed value, each optionally thread safe.
* Methods: a random double in [0.0, 1.0), a non-negative random 32-bit integer, filling a byte array over the full range 0..255, random N-bit integers (0 < N < 32) and N-bit longs (0 < N < 64), and the fill/array/infinite-sequence variants for uniform doubles.
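The reference algorithm from Blackman & Vigna is short enough to sketch. The code below transcribes the public-domain xoshiro256** 1.0 step together with a splitmix64 seeder, as suggested above; it illustrates the algorithm only and is not the library's implementation.

```csharp
// Sketch of xoshiro256** 1.0 (Blackman & Vigna), seeded from splitmix64.
public sealed class Xoshiro256StarStarSketch
{
    ulong _s0, _s1, _s2, _s3;

    public Xoshiro256StarStarSketch(ulong seed)
    {
        // Fill the 256-bit state from a 64-bit seed via splitmix64,
        // as recommended by the authors (the state must not be all zero).
        _s0 = SplitMix64(ref seed);
        _s1 = SplitMix64(ref seed);
        _s2 = SplitMix64(ref seed);
        _s3 = SplitMix64(ref seed);
    }

    static ulong Rotl(ulong x, int k) => (x << k) | (x >> (64 - k));

    static ulong SplitMix64(ref ulong state)
    {
        ulong z = state += 0x9E3779B97F4A7C15UL;
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9UL;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBUL;
        return z ^ (z >> 31);
    }

    public ulong NextUInt64()
    {
        ulong result = Rotl(_s1 * 5, 7) * 9;
        ulong t = _s1 << 17;
        _s2 ^= _s0;
        _s3 ^= _s1;
        _s1 ^= _s2;
        _s0 ^= _s3;
        _s2 ^= t;
        _s3 = Rotl(_s3, 45);
        return result;
    }

    // Uniform double in [0, 1) from the top 53 bits of the sample.
    public double NextDouble() => (NextUInt64() >> 11) * (1.0 / 9007199254740992.0);
}
```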
* The infinite double sequences again support being requested from multiple threads, but each resulting sequence must be enumerated from a single thread.

Splitmix64: a helper RNG that advances a 64-bit state (which may take any value, including zero) and returns a new random UInt64. Splitmix64 produces equidistributed outputs, so if a zero is generated the next zero will occur only after a further 2^64 outputs.

Bisection root-finding algorithm. Finds a solution of the equation f(x) = 0:

* FindRoot with guessed bounds: takes the function, guesses for the low and high values of the range where the root is supposed to be (expanded if needed), the desired accuracy (default 1e-8, must be greater than 0), the maximum number of iterations (default 100), the factor at which to expand the bounds if needed (default 1.6) and the maximum number of expand iterations (default 100); returns the root with the specified accuracy.
* FindRoot with fixed bounds: the same, without the expansion parameters.
* TryFindRoot: takes the fixed bounds, a desired accuracy that applies to both the root and the function value at the root, and the maximum number of iterations (usually 100); returns true if a root with the specified accuracy was found, else false, and the root output is undefined when false is returned.
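As an illustration of the bisection variant described above, here is a minimal sketch. The defaults and the interval-expansion step of the real routine are omitted, the names are my own, and the bracket must already contain a sign change.

```csharp
using System;

public static class BisectionSketch
{
    // Minimal bisection: finds x with f(x) = 0 inside [lower, upper],
    // assuming f(lower) and f(upper) have opposite signs.
    public static bool TryFindRoot(Func<double, double> f, double lower, double upper,
                                   double accuracy, int maxIterations, out double root)
    {
        double fLower = f(lower);
        double fUpper = f(upper);
        root = double.NaN;
        if (fLower * fUpper > 0) return false; // no sign change: not bracketed

        for (int i = 0; i < maxIterations; i++)
        {
            double mid = 0.5 * (lower + upper);
            double fMid = f(mid);
            if (Math.Abs(fMid) <= accuracy || 0.5 * (upper - lower) <= accuracy)
            {
                root = mid;
                return true;
            }
            // Keep the half interval that still brackets the sign change.
            if (fLower * fMid < 0) { upper = mid; }
            else { lower = mid; fLower = fMid; }
        }
        return false;
    }
}
```

For example, `BisectionSketch.TryFindRoot(x => x * x - 2, 0, 2, 1e-8, 100, out var r)` returns true with r close to 1.41421356.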
Brent's method: algorithm by Brent, Van Wijngaarden, Dekker et al.; implementation inspired by Press, Teukolsky, Vetterling and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press. Finds a solution of f(x) = 0:

* FindRoot with guessed bounds (expanded if needed, expand factor default 1.6, at most 100 expand iterations) or with fixed bounds; desired accuracy default 1e-8 (must be greater than 0), maximum number of iterations default 100; returns the root with the specified accuracy.
* TryFindRoot returns true if a root with the specified accuracy was found, else false; the root output is undefined otherwise.
* A small helper returns a*sign(b), which is useful for preventing rounding errors.

Broyden's method: algorithm by Broyden; implementation inspired by "Numerical Recipes in C", 2nd edition. Finds a solution of the system of equations f(x) = 0:

* FindRoot takes the function, an initial guess of the root, the desired accuracy (default 1e-8, must be greater than 0), the maximum number of iterations (default 100) and the relative step size used to calculate the Jacobian matrix at the first step (default 1.0e-4); TryFindRoot variants report success as a boolean and leave the root output undefined on failure.
* An internal helper calculates an approximation of the Jacobian from the function, the initial guess, the function value at that guess and the relative step size.

Cubic equations: finds the roots of the cubic equation x^3 + a2*x^2 + a1*x + a0 = 0, implementing the cubic formula from http://mathworld.wolfram.com/CubicFormula.html.

* Internally, Q and R are the transformed variables of the cubic formula, and a dedicated cube-root helper works around raising a negative double to the power 1/3.
* One method finds all real-valued roots of a0 + a1*x + a2*x^2 + x^3 = 0; another finds all three complex roots of d + c*x + b*x^2 + a*x^3 = 0. Note the special coefficient order, ascending by exponent (consistent with polynomials).
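To make the Q/R transformation concrete, here is a hedged sketch of the real-root case of the cubic formula, with coefficients in the ascending order a0 + a1*x + a2*x^2 + x^3 used above. It follows the MathWorld derivation rather than the library's exact code, and the names are illustrative.

```csharp
using System;
using System.Collections.Generic;

public static class CubicSketch
{
    // Cube root that also works for negative arguments.
    static double Cbrt(double v) => v < 0 ? -Math.Pow(-v, 1.0 / 3.0) : Math.Pow(v, 1.0 / 3.0);

    // Real roots of a0 + a1*x + a2*x^2 + x^3 = 0 (MathWorld "CubicFormula").
    public static List<double> RealRoots(double a0, double a1, double a2)
    {
        double q = (3 * a1 - a2 * a2) / 9.0;
        double r = (9 * a2 * a1 - 27 * a0 - 2 * a2 * a2 * a2) / 54.0;
        double d = q * q * q + r * r;          // polynomial discriminant
        var roots = new List<double>();

        if (d >= 0)
        {
            // One real root is reported here (the other two are complex, or coincide when d = 0).
            double s = Cbrt(r + Math.Sqrt(d));
            double t = Cbrt(r - Math.Sqrt(d));
            roots.Add(-a2 / 3.0 + s + t);
        }
        else
        {
            // Three distinct real roots via the trigonometric form.
            double theta = Math.Acos(r / Math.Sqrt(-q * q * q));
            double m = 2 * Math.Sqrt(-q);
            roots.Add(m * Math.Cos(theta / 3.0) - a2 / 3.0);
            roots.Add(m * Math.Cos((theta + 2 * Math.PI) / 3.0) - a2 / 3.0);
            roots.Add(m * Math.Cos((theta + 4 * Math.PI) / 3.0) - a2 / 3.0);
        }
        return roots;
    }
}
```

For x^3 - 6x^2 + 11x - 6 = 0 (that is, a0 = -6, a1 = 11, a2 = -6) the trigonometric branch returns the roots 1, 2 and 3.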
Pure Newton-Raphson: a Newton-Raphson root-finding algorithm without any recovery measures in case it behaves badly; the iteration aborts immediately if the root estimate leaves the bracketing interval. Finds a solution of f(x) = 0:

* FindRoot takes the function, its first derivative, the low and high values of the range where the root is supposed to be (defaulting to MinValue and MaxValue when an initial guess is supplied), the desired accuracy (default 1e-8, must be greater than 0) and the maximum number of iterations (default 100); it returns the root with the specified accuracy.
* TryFindRoot additionally takes an initial guess and reports success as a boolean; the root output is undefined when it returns false. Typical values: accuracy 1e-14, 100 iterations.

Robust Newton-Raphson: a Newton-Raphson algorithm that falls back to bisection when overshooting or converging too slowly, or to subdivision when bracketing is lacking.

* FindRoot and TryFindRoot take the function, its first derivative, the bracketing range, the desired accuracy (default 1e-8), the maximum number of iterations (default 100) and the number of parts an interval should be split into when scanning for a zero crossing in case of lacking bracketing (default 20).
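A bare-bones version of the "pure" Newton-Raphson iteration above, including the abort-on-leaving-the-interval rule, could be sketched as follows. There is deliberately no bisection fallback, so it shares the fragility noted in the description; names and structure are illustrative.

```csharp
using System;

public static class NewtonRaphsonSketch
{
    // Pure Newton-Raphson: x <- x - f(x)/f'(x), aborting if x leaves [lower, upper].
    public static bool TryFindRoot(Func<double, double> f, Func<double, double> df,
                                   double initialGuess, double lower, double upper,
                                   double accuracy, int maxIterations, out double root)
    {
        double x = initialGuess;
        for (int i = 0; i < maxIterations; i++)
        {
            double fx = f(x);
            if (Math.Abs(fx) <= accuracy) { root = x; return true; }

            double dfx = df(x);
            if (dfx == 0.0) break;             // flat derivative: cannot continue

            x -= fx / dfx;                     // Newton step
            if (x < lower || x > upper) break; // left the bracketing interval: abort
        }
        root = double.NaN;
        return false;
    }
}
```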
Pure Secant method: a secant root-finding algorithm without any recovery measures in case it behaves badly; the iteration aborts immediately if the root estimate leaves the bound interval.

* FindRoot takes the function, two initial guesses of the root within the specified bounds, the low and high values of the range where the root is supposed to be (defaults MinValue and MaxValue), the desired accuracy (default 1e-8, must be greater than 0) and the maximum number of iterations (default 100); it returns the root with the specified accuracy.
* TryFindRoot reports success as a boolean; the root output is undefined when it returns false. Typical values: accuracy 1e-14, 100 iterations.

Bracketing helper: detects a range containing at least one root, given the function, the lower and upper values of the range, a growing factor for the search (usually 1.6) and a maximum number of iterations (usually 50). It returns true if the bracketing operation succeeded, false otherwise; this iterative method stops as soon as two values with opposite signs are found.
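The secant update only needs the two most recent iterates, so a sketch is short. Again there are none of the recovery measures the robust variants add, and the names are illustrative.

```csharp
using System;

public static class SecantSketch
{
    // Pure secant method: x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0)).
    public static bool TryFindRoot(Func<double, double> f, double guess0, double guess1,
                                   double lower, double upper,
                                   double accuracy, int maxIterations, out double root)
    {
        double x0 = guess0, x1 = guess1;
        double f0 = f(x0), f1 = f(x1);

        for (int i = 0; i < maxIterations; i++)
        {
            if (Math.Abs(f1) <= accuracy) { root = x1; return true; }
            if (f1 == f0) break;                  // secant is horizontal: give up

            double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);
            if (x2 < lower || x2 > upper) break;  // left the bound interval: abort

            x0 = x1; f0 = f1;
            x1 = x2; f1 = f(x1);
        }
        root = double.NaN;
        return false;
    }
}
```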
Sorting: sorting algorithms for single, tuple and triple lists, all in place using the quick sort algorithm.

* Sort a list of keys; sort a list of keys while permuting one or two item lists the same way; each overload takes a comparison delegate defining the sort order and can optionally be restricted to a range given by a zero-based starting index and a length.
* Sort a primary list and, on duplicate primary items, a secondary list (which is permuted along with it), with separate comparisons defining the primary and the secondary sort order.
* Internally, recursive in-place quick sort implementations handle the key-only case, the one- and two-item-list cases and the primary/secondary case, working on left/right boundaries and using an element-swap helper.
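The "sort keys and permute an item list the same way" operation is the interesting one; a compact recursive sketch of it (Lomuto-style partitioning, illustrative names, no range overloads) is shown below.

```csharp
using System;
using System.Collections.Generic;

public static class KeyedQuickSortSketch
{
    // In-place quick sort of 'keys', reordering 'items' with the same permutation.
    public static void Sort<TKey, TItem>(IList<TKey> keys, IList<TItem> items, Comparison<TKey> compare)
        => Sort(keys, items, compare, 0, keys.Count - 1);

    static void Sort<TKey, TItem>(IList<TKey> keys, IList<TItem> items, Comparison<TKey> compare,
                                  int left, int right)
    {
        if (left >= right) return;

        TKey pivot = keys[right];
        int store = left;
        for (int i = left; i < right; i++)
        {
            if (compare(keys[i], pivot) <= 0)
            {
                Swap(keys, i, store);
                Swap(items, i, store);
                store++;
            }
        }
        Swap(keys, store, right);
        Swap(items, store, right);

        Sort(keys, items, compare, left, store - 1);
        Sort(keys, items, compare, store + 1, right);
    }

    static void Swap<T>(IList<T> list, int a, int b)
    {
        T tmp = list[a]; list[a] = list[b]; list[b] = tmp;
    }
}
```

Sorting the keys {3, 1, 2} with the items {"c", "a", "b"} yields {1, 2, 3} and {"a", "b", "c"}.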
Special functions. The SpecialFunctions class is split into partial implementations covering the Airy functions, the Bessel and modified Bessel functions, the spherical Bessel functions, the error function, the Hankel functions, the harmonic function and the logistic function.

Airy functions (real and complex arguments):

* AiryAi(z), a solution of the Airy equation y'' - y*z = 0, and its derivative AiryAiPrime(z) = d/dz AiryAi(z).
* AiryBi(z), the second solution of the Airy equation, and its derivative AiryBiPrime(z) = d/dz AiryBi(z).
* Exponentially scaled variants: ScaledAiryAi(z) = Exp(zta) * AiryAi(z) and ScaledAiryAiPrime(z) = Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z); ScaledAiryBi(z) = Exp(-Abs(zta.Real)) * AiryBi(z) and ScaledAiryBiPrime(z) = Exp(-Abs(zta.Real)) * AiryBiPrime(z).

Bessel functions (order n, real and complex arguments):

* BesselJ(n, z), the Bessel function of the first kind, a solution of the Bessel differential equation; ScaledBesselJ(n, z) = Exp(-Abs(z.Imaginary)) * BesselJ(n, z).
* BesselY(n, z), the Bessel function of the second kind; ScaledBesselY(n, z) = Exp(-Abs(z.Imaginary)) * BesselY(n, z).
* BesselI(n, z), the modified Bessel function of the first kind, a solution of the modified Bessel differential equation; ScaledBesselI(n, z) = Exp(-Abs(z.Real)) * BesselI(n, z).
* BesselK(n, z), the modified Bessel function of the second kind; ScaledBesselK(n, z) = Exp(z) * BesselK(n, z).

Beta functions:

* The Euler Beta function and its logarithm, evaluated at (z, w); both parameters must be positive real numbers, otherwise an exception is thrown.
* The lower incomplete (unregularized) beta function B(a,b,x) = int(t^(a-1) * (1-t)^(b-1), t = 0..x) and the regularized lower incomplete beta function I_x(a,b) = 1/Beta(a,b) * int(t^(a-1) * (1-t)^(b-1), t = 0..x), both for real a > 0, b > 0 and 0 <= x <= 1.
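Since Beta(z, w) = Gamma(z) * Gamma(w) / Gamma(z + w), the Beta function is normally computed through the log-gamma function so that intermediate values do not overflow. The sketch below illustrates that route using the classic Lanczos-type log-gamma approximation from Numerical Recipes; it is an illustration under those assumptions, not the library's higher-precision implementation.

```csharp
using System;

public static class BetaSketch
{
    // Log-gamma via the Lanczos-type approximation from "Numerical Recipes in C" (gammln).
    static double GammaLn(double x)
    {
        double[] cof =
        {
            76.18009172947146, -86.50532032941677, 24.01409824083091,
            -1.231739572450155, 0.1208650973866179e-2, -0.5395239384953e-5
        };
        double y = x;
        double tmp = x + 5.5;
        tmp -= (x + 0.5) * Math.Log(tmp);
        double ser = 1.000000000190015;
        for (int j = 0; j < 6; j++) ser += cof[j] / ++y;
        return -tmp + Math.Log(2.5066282746310005 * ser / x);
    }

    // ln Beta(z, w) = ln Gamma(z) + ln Gamma(w) - ln Gamma(z + w), for z > 0, w > 0.
    public static double BetaLn(double z, double w)
    {
        if (z <= 0 || w <= 0) throw new ArgumentOutOfRangeException("z and w must be positive");
        return GammaLn(z) + GammaLn(w) - GammaLn(z + w);
    }

    // Beta(z, w), obtained by exponentiating the logarithm to avoid overflow.
    public static double Beta(double z, double w) => Math.Exp(BetaLn(z, w));
}
```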
Error function. The implementation is based on rational polynomial approximations: separate numerator and denominator coefficient sets are tabulated for Erf(x) on [1e-10, 0.5] and for Erfc(x) on the intervals [0.5, 0.75], [0.75, 1.25], [1.25, 2.25], [2.25, 3.5], [3.5, 5.25], [5.25, 8], [8, 11.5], [11.5, 17], [17, 24], [24, 38], [38, 60], [60, 85] and [85, 110]. A second set of numerator/denominator coefficients covers the inverse error function Erf^-1(z) on [0, 0.5], [0.5, 0.75] and on [0.75, 1] subdivided by the magnitude of the transformed argument x (less than 3, 3 to 6, 6 to 18, 18 to 44, greater than 44).

* Erf(x), the error function; returns 1 for x = double.PositiveInfinity and -1 for x = double.NegativeInfinity.
* Erfc(x), the complementary error function; returns 0 for positive infinity and 2 for negative infinity.
* The inverse error function evaluated at z; returns double.PositiveInfinity for z >= 1.0 and double.NegativeInfinity for z <= -1.0.
* The complementary inverse error function evaluated at z; returns double.PositiveInfinity for z <= 0.0 and double.NegativeInfinity for z >= 2.0. This implementation has been tested against the arbitrary-precision mpmath library, and in some cases only 9 significant figures can be guaranteed.
* Internal helpers evaluate the rational approximations (optionally computing 1 minus the error function) and compute the inverse error function from three intermediate parameters.
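For orientation, a much simpler approximation than the tabulated rational polynomials is the classic Abramowitz & Stegun formula 7.1.26, accurate to about 1.5e-7. The sketch below is offered only to make the error-function shape concrete and is not a substitute for the high-precision implementation described here.

```csharp
using System;

public static class ErfSketch
{
    // Abramowitz & Stegun 7.1.26: absolute error below about 1.5e-7.
    public static double Erf(double x)
    {
        double sign = x < 0 ? -1.0 : 1.0;
        x = Math.Abs(x);

        const double p = 0.3275911;
        double t = 1.0 / (1.0 + p * x);
        double poly = t * (0.254829592
                     + t * (-0.284496736
                     + t * (1.421413741
                     + t * (-1.453152027
                     + t * 1.061405429))));
        double y = 1.0 - poly * Math.Exp(-x * x);

        return sign * y; // erf is odd: erf(-x) = -erf(x)
    }

    // Complementary error function.
    public static double Erfc(double x) => 1.0 - Erf(x);
}
```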
Generalized Exponential Integral. Computes the generalized exponential integral E_n(x), where n is the integer generalization index (the power of the denominator term) and x is the argument. The implementation follows the derivation in "Handbook of Mathematical Functions" (Abramowitz, M. and Stegun, I. A., Applied Mathematics Series Volume 55, 1964, reprinted 1968 by Dover Publications, New York; chapters 6, 7 and 26) and "Advanced Mathematical Methods for Scientists and Engineers" (Bender, Carl M. and Steven A. Orszag, 1978, page 253). For x > 1 it uses the continued-fraction approach that is often used to compute the incomplete gamma function; for 0 < x <= 1 it uses a Taylor series expansion. Unit tests suggest the result is accurate to about 13 floating point digits.

Factorials and binomial coefficients:

* The factorial function x -> x! of an integer greater than 0: values up to 22! are represented exactly, values up to 170! fit into a double, and larger values overflow. When several factorials are to be multiplied or divided, consider the logarithmic version instead, so that products become sums and quotients become differences before exponentiating the result; this also avoids the rapid growth of the factorial even for small arguments. An overload computing the factorial of an integer and the logarithmic factorial function x -> ln(x!) are provided as well.
* The binomial coefficient "n choose k" and its logarithm ln(n choose k), for nonnegative n and k.
* The multinomial coefficient "n choose n1, n2, n3, ...", where the nonnegative values ni must sum to n; the argument array must not be null, the values must not be negative, and their sum must equal n, otherwise an exception is thrown.
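The "add logs instead of multiplying factorials" advice can be illustrated with a tiny sketch: compute ln(n!) by summing logarithms (the library would use a gamma-based routine instead) and recover the binomial coefficient by exponentiating and rounding. Names are illustrative.

```csharp
using System;

public static class BinomialSketch
{
    // Naive ln(n!) by summing logs; fine for illustration, a real
    // implementation would use the log-gamma function instead.
    public static double FactorialLn(int n)
    {
        if (n < 0) throw new ArgumentOutOfRangeException(nameof(n));
        double sum = 0.0;
        for (int i = 2; i <= n; i++) sum += Math.Log(i);
        return sum;
    }

    // ln(n choose k) = ln(n!) - ln(k!) - ln((n-k)!).
    public static double BinomialLn(int n, int k)
    {
        if (k < 0 || k > n) return double.NegativeInfinity; // coefficient is 0
        return FactorialLn(n) - FactorialLn(k) - FactorialLn(n - k);
    }

    // n choose k, recovered by exponentiating and rounding.
    public static double Binomial(int n, int k)
    {
        if (k < 0 || k > n) return 0.0;
        return Math.Round(Math.Exp(BinomialLn(n, k)));
    }
}
```

For example, `BinomialSketch.Binomial(49, 6)` evaluates to 13983816.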
Gamma functions:

* The gamma function and its logarithm are computed with a Lanczos-type approximation (stored as the order of the approximation, an auxiliary variable and the polynomial coefficients), following the derivation in "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. The implementation listed on p. 116 is used, which should achieve about 16 floating point digits (improving the accuracy further is possible, see p. 126); unit tests suggest roughly 14 correct digits for the logarithm of the gamma function and 13 for the gamma function itself.
* Incomplete gamma functions for real a > 0, x > 0: the lower incomplete gamma function gamma(a,x) = int(exp(-t) t^(a-1), t = 0..x), its regularized form P(a,x) = gamma(a,x) / Gamma(a), the upper incomplete gamma function Gamma(a,x) and its regularized form Q(a,x) = 1 - P(a,x), plus the inverse P^(-1) of the regularized lower incomplete gamma function such that P^(-1)(a, P(a,x)) == x.
* The Digamma function, defined as the derivative of the logarithm of the gamma function, based on Jose Bernardo, "Algorithm AS 103: Psi (Digamma) Function", Applied Statistics, Volume 25, Number 3, 1976, pages 315-317, with the modifications from Tom Minka's lightspeed toolbox; and its inverse, which only returns positive solutions and is implemented with the bisection method.
* The rising factorial (Pochhammer function) (x)n and the falling factorial x(n), for n >= 0; see https://en.wikipedia.org/wiki/Falling_and_rising_factorials

Generalized hypergeometric function pFq(a1, ..., ap; b1, ..., bq; z): a power series in which the ratio of successive coefficients indexed by n is a rational function of n. The arguments are the list of coefficients in the numerator, the list of coefficients in the denominator and the variable z of the power series. See https://en.wikipedia.org/wiki/Generalized_hypergeometric_function
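The defining property that the term ratio is a rational function of n translates directly into a summation loop: each new term is the previous one multiplied by prod(a_i + n) / prod(b_j + n) * z / (n + 1). A naive sketch with no safeguards beyond a term-size cutoff follows; it assumes the series converges for the given z and is not the library's implementation.

```csharp
using System;

public static class HypergeometricSketch
{
    // Naive partial-sum evaluation of pFq(a; b; z).
    public static double GeneralizedHypergeometric(double[] a, double[] b, double z,
                                                   double tolerance = 1e-15, int maxTerms = 1000)
    {
        double sum = 1.0;   // n = 0 term: every Pochhammer symbol equals 1
        double term = 1.0;

        for (int n = 0; n < maxTerms; n++)
        {
            double ratio = z / (n + 1);
            foreach (double ai in a) ratio *= ai + n;   // (ai)_{n+1} / (ai)_n
            foreach (double bj in b) ratio /= bj + n;   // (bj)_{n+1} / (bj)_n

            term *= ratio;
            sum += term;
            if (Math.Abs(term) < tolerance * Math.Abs(sum)) break;
        }
        return sum;
    }
}
```

With empty coefficient lists the series reduces to exp(z), so `GeneralizedHypergeometric(new double[0], new double[0], 1.0)` is close to Math.E.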
Hankel functions (order n, complex argument z, with j = Sqrt(-1)):

* HankelH1(n, z) = BesselJ(n, z) + j * BesselY(n, z), the Hankel function of the first kind, and its exponentially scaled variant ScaledHankelH1(n, z) = Exp(-z * j) * HankelH1(n, z).
* HankelH2(n, z) = BesselJ(n, z) - j * BesselY(n, z), the Hankel function of the second kind, and ScaledHankelH2(n, z) = Exp(z * j) * HankelH2(n, z).

Harmonic numbers:

* The t'th harmonic number, 1 + 1/2 + ... + 1/t.
* The generalized harmonic number of order n of m: 1 + 1/2^m + 1/3^m + ... + 1/n^m.
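Both quantities are plain finite sums, so a direct sketch suffices (a production implementation may switch to asymptotic formulas for large n, which this does not):

```csharp
using System;

public static class HarmonicSketch
{
    // t'th harmonic number: 1 + 1/2 + ... + 1/t.
    public static double Harmonic(int t)
    {
        double sum = 0.0;
        for (int k = 1; k <= t; k++) sum += 1.0 / k;
        return sum;
    }

    // Generalized harmonic number of order n of m: 1 + 1/2^m + ... + 1/n^m.
    public static double GeneralHarmonic(int n, double m)
    {
        double sum = 0.0;
        for (int k = 1; k <= n; k++) sum += 1.0 / Math.Pow(k, m);
        return sum;
    }
}
```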
- KelvinKei(nu, x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). - - the order of the the Kelvin function. - The non-negative real value to compute the Kelvin function of. - The Kelvin function kei. - - - - Returns the Kelvin function kei. - KelvinKei(x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). - KelvinKei(x) is equivalent to KelvinKei(0, x). - - The non-negative real value to compute the Kelvin function of. - The Kelvin function kei. - - - - Returns the derivative of the Kelvin function ker. - - The order of the Kelvin function. - The non-negative real value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function ker. - - - - Returns the derivative of the Kelvin function ker. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function ker. - - - - Returns the derivative of the Kelvin function kei. - - The order of the Kelvin function. - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function kei. - - - - Returns the derivative of the Kelvin function kei. - - The value to compute the derivative of the Kelvin function of. - The derivative of the Kelvin function kei. - - - - Computes the logistic function. see: http://en.wikipedia.org/wiki/Logistic - - The parameter for which to compute the logistic function. - The logistic function of . - - - - Computes the logit function, the inverse of the sigmoid logistic function. see: http://en.wikipedia.org/wiki/Logit - - The parameter for which to compute the logit function. This number should be - between 0 and 1. - The logarithm of divided by 1.0 - . - - - - ************************************** - COEFFICIENTS FOR METHODS bessi0 * - ************************************** - - Chebyshev coefficients for exp(-x) I0(x) - in the interval [0, 8]. - - lim(x->0){ exp(-x) I0(x) } = 1. - - - - Chebyshev coefficients for exp(-x) sqrt(x) I0(x) - in the inverted interval [8, infinity]. - - lim(x->inf){ exp(-x) sqrt(x) I0(x) } = 1/sqrt(2pi). - - - - - ************************************** - COEFFICIENTS FOR METHODS bessi1 * - ************************************** - - Chebyshev coefficients for exp(-x) I1(x) / x - in the interval [0, 8]. - - lim(x->0){ exp(-x) I1(x) / x } = 1/2. - - - - Chebyshev coefficients for exp(-x) sqrt(x) I1(x) - in the inverted interval [8, infinity]. - - lim(x->inf){ exp(-x) sqrt(x) I1(x) } = 1/sqrt(2pi). - - - - - ************************************** - COEFFICIENTS FOR METHODS bessk0, bessk0e * - ************************************** - - Chebyshev coefficients for K0(x) + log(x/2) I0(x) - in the interval [0, 2]. The odd order coefficients are all - zero; only the even order coefficients are listed. - - lim(x->0){ K0(x) + log(x/2) I0(x) } = -EUL. - - - - Chebyshev coefficients for exp(x) sqrt(x) K0(x) - in the inverted interval [2, infinity]. - - lim(x->inf){ exp(x) sqrt(x) K0(x) } = sqrt(pi/2). - - - - - ************************************** - COEFFICIENTS FOR METHODS bessk1, bessk1e * - ************************************** - - Chebyshev coefficients for x(K1(x) - log(x/2) I1(x)) - in the interval [0, 2]. - - lim(x->0){ x(K1(x) - log(x/2) I1(x)) } = 1. - - - - Chebyshev coefficients for exp(x) sqrt(x) K1(x) - in the interval [2, infinity]. - - lim(x->inf){ exp(x) sqrt(x) K1(x) } = sqrt(pi/2). - - - - Returns the modified Bessel function of first kind, order 0 of the argument. -

- The function is defined as i0(x) = j0( ix ). -

- The range is partitioned into the two intervals [0, 8] and - (8, infinity). Chebyshev polynomial expansions are employed - in each interval. -

- The value to compute the Bessel function of. - -
- - Returns the modified Bessel function of first kind, - order 1 of the argument. -

- The function is defined as i1(x) = -i j1( ix ). -

- The range is partitioned into the two intervals [0, 8] and - (8, infinity). Chebyshev polynomial expansions are employed - in each interval. -

- The value to compute the Bessel function of. - -
- Returns the modified Bessel function of the second kind, order 0, of the argument.
- The range is partitioned into the two intervals [0, 2] and (2, infinity); Chebyshev polynomial expansions are employed in each interval.
- Parameter: the value to compute the Bessel function of.
- Returns the exponentially scaled modified Bessel function of the second kind, order 0, of the argument.
- Parameter: the value to compute the Bessel function of.
- Returns the modified Bessel function of the second kind, order 1, of the argument.
- The range is partitioned into the two intervals [0, 2] and (2, infinity); Chebyshev polynomial expansions are employed in each interval.
- Parameter: the value to compute the Bessel function of.
- Returns the exponentially scaled modified Bessel function of the second kind, order 1, of the argument: k1e(x) = exp(x) * k1(x).
- Parameter: the value to compute the Bessel function of.
- [Removed documentation continues with the modified Struve functions of order 0 and 1, the differences BesselI0 - StruveL0 and BesselI1 - StruveL1, and the spherical Bessel functions of the first and second kind, SphericalBesselJ(n, z) = Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z) and SphericalBesselY(n, z) = Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z).]
- Numerically stable helpers: ExpM1, i.e. x -> exp(x) - 1, and Hypotenuse, i.e. (a, b) -> sqrt(a^2 + b^2) computed without underflow or overflow, provided for several numeric types.
- Evaluation functions, useful for function approximation.
- Evaluate a polynomial at point x. Coefficients are ordered by power, with the coefficient for power k at index k; for example, the coefficients [3, -1, 2] represent y = 2x^2 - x + 3.
- Parameters: the location at which to evaluate the polynomial, and the coefficients of the polynomial (coefficient for power k at index k).
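The polynomial-evaluation and Hypotenuse entries above describe two small building blocks: Horner evaluation with coefficients ordered by power, and a hypotenuse computed without intermediate overflow or underflow by factoring out the larger magnitude. A minimal C# sketch of both ideas (illustrative helpers, not the library's own methods):

```csharp
using System;

static class NumericSketches
{
    // Horner evaluation; coefficients[k] is the coefficient of x^k,
    // so {3, -1, 2} represents y = 2x^2 - x + 3.
    public static double Evaluate(double x, params double[] coefficients)
    {
        double sum = 0.0;
        for (int k = coefficients.Length - 1; k >= 0; k--)
        {
            sum = sum * x + coefficients[k];
        }
        return sum;
    }

    // sqrt(a^2 + b^2) without intermediate overflow/underflow:
    // factor out the larger magnitude before squaring.
    public static double Hypotenuse(double a, double b)
    {
        if (Math.Abs(a) > Math.Abs(b))
        {
            double r = b / a;
            return Math.Abs(a) * Math.Sqrt(1.0 + r * r);
        }
        if (b != 0.0)
        {
            double r = a / b;
            return Math.Abs(b) * Math.Sqrt(1.0 + r * r);
        }
        return 0.0;
    }
}
```

With this coefficient ordering, Evaluate(2.0, 3, -1, 2) returns 2*4 - 2 + 3 = 9.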
- Numerically stable series summation: the summands are provided sequentially and the sum is returned.
- Evaluates the series of Chebyshev polynomials Ti at argument x/2. The series is given by
-     y = Sum'(i = 0..N-1) coef[i] * T_i(x/2)
- Coefficients are stored in reverse order, i.e. the zero-order term is last in the array. Note that N is the number of coefficients, not the order.
- If the coefficients are for the interval a to b, x must have been transformed to x -> 2(2x - b - a)/(b - a) before entering the routine. This maps x from (a, b) to (-1, 1), over which the Chebyshev polynomials are defined.
- If the coefficients are for the inverted interval, in which (a, b) is mapped to (1/b, 1/a), the required transformation is x -> 2(2ab/x - b - a)/(b - a). If b is infinity, this becomes x -> 4a/x - 1.
- SPEED: taking advantage of the recurrence properties of the Chebyshev polynomials, the routine requires one more addition per loop than evaluating a nested polynomial of the same degree.
- Parameters: the coefficients of the polynomial and the argument to the polynomial.
- Reference: https://bpm2.svn.codeplex.com/svn/Common.Numeric/Arithmetic.cs (marked as deprecated in http://people.apache.org/~isabel/mahout_site/mahout-matrix/apidocs/org/apache/mahout/jet/math/Arithmetic.html).
- Summation of Chebyshev polynomials, using the Clenshaw method with the Reinsch modification.
- Parameters: the number of terms in the sequence, the coefficients of the Chebyshev series (length n+1), and the value at which the series is to be evaluated.
- ORIGINAL AUTHOR: Dr. Allan J. MacLeod, Dept. of Mathematics and Statistics, University of Paisley, High St., Paisley, Scotland.
- REFERENCE: "An error analysis of the modified Clenshaw method for evaluating Chebyshev and Fourier series", J. Oliver, J.I.M.A., vol. 20, 1977, pp. 379-391.
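For reference, the recurrence described above (series in T_i(x/2), coefficients stored in reverse order, one extra addition per loop compared with Horner) looks roughly like the following C# sketch. It follows the classic Cephes-style chbevl routine that this wording closely mirrors, so treat it as an illustration rather than the library's exact code:

```csharp
// Evaluates y = Sum'(i = 0..N-1) coef[i] * T_i(x/2) with coef stored in
// reverse order (zero-order term last).
static double Chbevl(double x, double[] coef)
{
    double b0 = coef[0];
    double b1 = 0.0;
    double b2 = 0.0;

    for (int i = 1; i < coef.Length; i++)
    {
        b2 = b1;
        b1 = b0;
        b0 = x * b1 - b2 + coef[i];   // Chebyshev recurrence: one extra add vs. Horner
    }

    return 0.5 * (b0 - b2);
}
```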
- Test functions for optimisation:
- Valley-shaped Rosenbrock function for 2 dimensions: (x, y) -> (1-x)^2 + 100*(y-x^2)^2, with a global minimum at (1, 1) where f(1, 1) = 0; common range [-5, 10] or [-2.048, 2.048]. The variant for 2 or more dimensions has a global minimum of all ones and, for 8 > N > 3, a local minimum at (-1, 1, ..., 1). (https://en.wikipedia.org/wiki/Rosenbrock_function, http://www.sfu.ca/~ssurjano/rosen.html)
- Himmelblau, a multi-modal function: (x, y) -> (x^2+y-11)^2 + (x+y^2-7)^2, with 4 global minima where f(x, y) = 0; common range [-6, 6]; named after David Mautner Himmelblau. (https://en.wikipedia.org/wiki/Himmelblau%27s_function)
- Rastrigin, a highly multi-modal function with many local minima: global minimum of all zeros with f(0) = 0; common range [-5.12, 5.12]. (https://en.wikipedia.org/wiki/Rastrigin_function, http://www.sfu.ca/~ssurjano/rastr.html)
- Drop-Wave, a multi-modal and highly complex function with many local minima: global minimum of all zeros with f(0) = -1; common range [-5.12, 5.12]. (http://www.sfu.ca/~ssurjano/drop.html)
- Ackley, a function with many local minima, nearly flat in outer regions but with a large hole at the centre: global minimum of all zeros with f(0) = 0; common range [-32.768, 32.768]. (http://www.sfu.ca/~ssurjano/ackley.html)
- Bowl-shaped first Bohachevsky function: global minimum of all zeros with f(0, 0) = 0; common range [-100, 100]. (http://www.sfu.ca/~ssurjano/boha.html)
- Plate-shaped Matyas function: global minimum of all zeros with f(0, 0) = 0; common range [-10, 10]. (http://www.sfu.ca/~ssurjano/matya.html)
- Valley-shaped six-hump camel back function: two global minima and four local minima; global minima with f(x) = -1.0316 at (0.0898, -0.7126) and (-0.0898, 0.7126); common range x in [-3, 3], y in [-2, 2]. (http://www.sfu.ca/~ssurjano/camel6.html)
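Two of the test functions listed above, restated directly as code (a small illustrative C# helper, not the library's API):

```csharp
using System;

static class TestFunctionSketch
{
    // Rosenbrock valley: global minimum f(1, 1) = 0.
    public static double Rosenbrock(double x, double y)
        => (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);

    // Himmelblau: four global minima with f = 0, e.g. f(3, 2) = 0.
    public static double Himmelblau(double x, double y)
        => Math.Pow(x * x + y - 11.0, 2) + Math.Pow(x + y * y - 7.0, 2);
}
```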
- [Removed documentation for statistics operating on unsorted arrays (with the warning that methods carrying the Inplace suffix may reorder the data array). For several numeric types the entries cover: minimum, maximum and the smallest/largest absolute value; the arithmetic, geometric and harmonic means; the unbiased sample variance and standard deviation (N-1 normalizer, Bessel's correction) and their population counterparts (N normalizer); combined mean/variance and mean/standard-deviation estimators; sample and population covariance; the root mean square (RMS); order statistics, median, percentiles, quartiles, inter-quartile range and the five-number summary; quantiles using the R-8 / SciPy-(1/3, 1/3) definition, i.e. linear interpolation of the approximate medians for order statistics, where tau < (2/3)/(N + 1/3) uses x1 and tau >= (N - 1/3)/(N + 1/3) uses xN; custom quantile definitions given either by the four Mathematica-style parameters a, b, c, d or by a named quantile definition; and ranks with a selectable rank definition. These return NaN on empty data or when any entry is NaN, and the order-statistic and quantile methods work in place and may reorder the array.]
- [Also removed: the Correlation class, providing the FFT-based auto-correlation function (for all lags, for a lag range kMin..kMax, or for a given set of lags), the Pearson and weighted Pearson product-moment correlation coefficients and matrices, and the Spearman ranked correlation coefficient and matrices.]
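The R-8 / SciPy-(1/3, 1/3) rule quoted above fully determines the quantile estimate on sorted data; a minimal C# sketch under the assumption that the input is already sorted ascending and contains no NaN (names are mine, not the library's API):

```csharp
using System;

static class QuantileSketch
{
    // R-8 quantile, tau in [0, 1], data sorted ascending.
    public static double QuantileR8(double[] sorted, double tau)
    {
        int n = sorted.Length;
        if (n == 0) return double.NaN;
        if (n == 1) return sorted[0];

        // Boundary rules quoted in the remark above.
        if (tau < (2.0 / 3.0) / (n + 1.0 / 3.0)) return sorted[0];
        if (tau >= (n - 1.0 / 3.0) / (n + 1.0 / 3.0)) return sorted[n - 1];

        // h is a 1-based fractional position; interpolate linearly between neighbours.
        double h = (n + 1.0 / 3.0) * tau + 1.0 / 3.0;
        int lower = (int)Math.Floor(h);
        return sorted[lower - 1] + (h - lower) * (sorted[lower] - sorted[lower - 1]);
    }
}
```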
- [Removed documentation continues with DescriptiveStatistics: a class computing the basic statistics of a data set that meets the NIST standard of accuracy for mean, variance and standard deviation (the only statistics NIST provides exact values for) and exceeds it in increased-accuracy mode; increased accuracy should not be used on data containing very large absolute values, since the internal calculations may overflow, and RunningStatistics is recommended instead. The class exposes Count, Mean, the unbiased Variance and StandardDeviation (N-1 normalizer), Skewness (zero if the count is less than three), Kurtosis (zero if the count is less than four), Minimum and Maximum, with overloads for streams of nullable and non-nullable values, and declares a DataContract for ephemeral serialization without any cross-version compatibility guarantee.]
- [Also removed: the Histogram and Bucket classes. A histogram consists of buckets, each a region bounded by an exclusive lower bound and an inclusive upper bound with a datapoint count; buckets support containment tests, comparison of disjoint buckets, width, cloning and a point comparer used for binary search. The histogram can be built with a given number of equally sized buckets (bounds taken from the data or supplied explicitly), adapts its bounds when datapoints fall outside the current range, keeps its buckets sorted lazily, and exposes the bucket containing a value, the bucket index, the lower and upper bounds, the bucket count and the total datapoint count.]
- Kernel density estimation (KDE): estimates the probability density function of a random variable, assuming the provided kernel is a real, non-negative function that integrates to 1. Built-in kernels: Gaussian (the PDF of the normal distribution with mean 0 and variance 1, the default), Epanechnikov (3/4 * (1 - x^2) for |x| <= 1, else 0; optimal in a mean-square-error sense), uniform (1/2 for |x| <= 1, else 0) and triangular (1 - |x| for |x| <= 1, else 0).
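The kernels listed above plug into the usual kernel density estimate f(x) ~ (1/(n*h)) * sum_i K((x - x_i)/h); a minimal C# sketch with the bandwidth h supplied by the caller (illustrative helper, not the library's API):

```csharp
using System;

static class KdeSketch
{
    // Epanechnikov kernel: 3/4 * (1 - u^2) for |u| <= 1, else 0.
    public static double Epanechnikov(double u)
        => Math.Abs(u) <= 1.0 ? 0.75 * (1.0 - u * u) : 0.0;

    // Kernel density estimate at point x with bandwidth h.
    public static double Estimate(double x, double[] samples, double h,
                                  Func<double, double> kernel)
    {
        double sum = 0.0;
        foreach (double xi in samples)
        {
            sum += kernel((x - xi) / h);
        }
        return sum / (samples.Length * h);
    }
}
```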
- [Removed documentation for the Markov chain Monte Carlo samplers:
- HybridMC and UnivariateHybridMC, hybrid (also called Hamiltonian) Monte Carlo samplers for multivariate and univariate distributions. They use the negative log density as a potential energy and a randomly generated momentum, sampled from a normal distribution with configurable standard deviations, to set up a Hamiltonian system that is simulated with a configurable number and size of frog-leap (leapfrog) steps; gradients default to a simple three-point numerical estimate, and a burn interval controls how many iterations run between returned samples. This can converge faster than the random-walk Metropolis sampler.
- MCMCDiagnostics, providing the auto-correlation of a series evaluated through a function f and the effective sample size when that auto-correlation is taken into account.
- The sampler building blocks: global and local proposal samplers, density, log-density and log-transition-kernel delegates, and the common sampler base class with its random source, Sample methods and acceptance rate.
- MetropolisHastingsSampler: samples from a distribution P by drawing from a proposal distribution Q and accepting or rejecting based on the density of P; Q need not be symmetric, but its log density must be evaluable. All densities are in log space, and the sampler is stateful (it keeps track of its current location).
- MetropolisSampler: as above, but requiring a symmetric proposal distribution.
- RejectionSampler: samples from P by drawing from a proposal Q and accepting or rejecting based on the densities of P and Q; the densities need not be normalized, but P(x) < Q(x) must hold for every x, and sampling throws when the algorithm detects that Q does not upper-bound P.]
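The Metropolis entries above describe the accept/reject rule in log space with a symmetric proposal and a burn interval between returned samples. A toy random-walk sketch of that idea in C# (class, field and parameter names here are illustrative, not the library's MetropolisSampler):

```csharp
using System;

// Random-walk Metropolis sketch: symmetric proposal, log-space density,
// burn-in iterations between returned samples.
sealed class MetropolisSketch
{
    readonly Func<double, double> _logDensity;
    readonly Random _rng = new Random();
    readonly double _stepSize;
    readonly int _burnInterval;
    double _current, _currentLogDensity;
    long _accepted, _proposed;

    public MetropolisSketch(double x0, Func<double, double> logDensity,
                            double stepSize, int burnInterval)
    {
        _current = x0;
        _logDensity = logDensity;
        _currentLogDensity = logDensity(x0);
        _stepSize = stepSize;
        _burnInterval = burnInterval;
    }

    public double AcceptanceRate => _proposed == 0 ? 0.0 : (double)_accepted / _proposed;

    public double Sample()
    {
        for (int i = 0; i <= _burnInterval; i++)
        {
            // Symmetric proposal: uniform step around the current location.
            double candidate = _current + _stepSize * (2.0 * _rng.NextDouble() - 1.0);
            double candidateLogDensity = _logDensity(candidate);
            _proposed++;

            // Accept with probability min(1, p(candidate)/p(current)), in log space.
            if (Math.Log(_rng.NextDouble()) < candidateLogDensity - _currentLogDensity)
            {
                _current = candidate;
                _currentLogDensity = candidateLogDensity;
                _accepted++;
            }
        }
        return _current;
    }
}
```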
- [Also removed: the univariate hybrid Monte Carlo constructors and helpers (momentum standard deviation, three-point numerical differentiation) and UnivariateSliceSampler, which samples from a distribution P by uniformly sampling from under its pdf using the technique of "Slice Sampling", R. Neal, 2003; densities are in log space, the sampler is stateful, and it exposes a scale factor and a burn interval.]
- On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Update the running statistics by adding another observed sample (in-place). - - - - - Update the running statistics by adding a sequence of observed sample (in-place). - - - - Replace ties with their mean (non-integer ranks). Default. - - - Replace ties with their minimum (typical sports ranking). - - - Replace ties with their maximum. - - - Permutation with increasing values at each index of ties. - - - - Running statistics accumulator, allows updating by adding values - or by combining two accumulators. - - - This type declares a DataContract for out of the box ephemeral serialization - with engines like DataContractSerializer, Protocol Buffers and FsPickler, - but does not guarantee any compatibility between versions. - It is not recommended to rely on this mechanism for durable persistence. - - - - - Gets the total number of samples. - - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - - - - Estimates the unbiased population skewness from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than three entries or if any entry is NaN. - - - - - Evaluates the population skewness from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than two entries or if any entry is NaN. - - - - - Estimates the unbiased population kurtosis from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than four entries or if any entry is NaN. - - - - - Evaluates the population kurtosis from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). 
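
The accumulator described above (total count, min/max, mean, sample and population variance/standard deviation, skewness, kurtosis, plus in-place updates and a combine operation) maps to Math.NET's `RunningStatistics` type. A short sketch, assuming `Push`, `PushRange` and a static `Combine` helper as the entries above suggest:

```csharp
using System;
using MathNet.Numerics.Statistics;

class RunningDemo
{
    static void Main()
    {
        // Accumulate statistics without keeping the samples in memory.
        var left = new RunningStatistics();
        left.PushRange(new[] { 1.0, 2.0, 3.0, 4.0 });

        var right = new RunningStatistics();
        right.Push(10.0);
        right.Push(12.0);

        // Two accumulators can be merged, e.g. after processing partitions separately.
        var combined = RunningStatistics.Combine(left, right);

        Console.WriteLine($"n={combined.Count}, mean={combined.Mean:F2}, " +
                          $"sample stddev={combined.StandardDeviation:F2}, " +
                          $"population stddev={combined.PopulationStandardDeviation:F2}");
    }
}
```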
- Returns NaN if data has less than three entries or if any entry is NaN. - - - - - Update the running statistics by adding another observed sample (in-place). - - - - - Update the running statistics by adding a sequence of observed sample (in-place). - - - - - Create a new running statistics over the combined samples of two existing running statistics. - - - - - Statistics operating on an array already sorted ascendingly. - - - - - - - - Returns the smallest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the largest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the order statistic (order 1..N) from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Estimates the median value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the p-Percentile value from the sorted data array (ascending). - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the first quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the third quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the inter-quartile range from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - - R-8, SciPy-(1/3,1/3): - Linear interpolation of the approximate medians for order statistics. - When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. - - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified - by 4 parameters a, b, c and d, consistent with Mathematica. - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - a-parameter - b-parameter - c-parameter - d-parameter - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - Sample array, must be sorted ascendingly. 
- Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the quantile tau from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the sorted data array (ascending). - The rank definition can be specified to be compatible - with an existing system. - - - - - Returns the smallest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the largest value from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - - - - Returns the order statistic (order 1..N) from the sorted data array (ascending). - - Sample array, must be sorted ascendingly. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Estimates the median value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the p-Percentile value from the sorted data array (ascending). - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the first quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the third quartile value from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the inter-quartile range from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - - R-8, SciPy-(1/3,1/3): - Linear interpolation of the approximate medians for order statistics. - When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. - - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified - by 4 parameters a, b, c and d, consistent with Mathematica. 
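
The sorted-array entries above all share one precondition: the array must be sorted ascendingly, after which median, quartiles, order statistics and R-8 quantiles can be read off cheaply. A sketch, assuming these are the static methods of `MathNet.Numerics.Statistics.SortedArrayStatistics`:

```csharp
using System;
using MathNet.Numerics.Statistics;

class SortedStatsDemo
{
    static void Main()
    {
        // The caller is responsible for sorting; every query afterwards is cheap.
        double[] data = { 7.0, 1.0, 3.0, 9.0, 5.0, 11.0, 2.0 };
        Array.Sort(data);

        Console.WriteLine($"median = {SortedArrayStatistics.Median(data)}");
        Console.WriteLine($"Q1     = {SortedArrayStatistics.LowerQuartile(data)}");
        Console.WriteLine($"Q3     = {SortedArrayStatistics.UpperQuartile(data)}");
        Console.WriteLine($"IQR    = {SortedArrayStatistics.InterquartileRange(data)}");
        // One-based order statistic: 2 means the second-smallest value.
        Console.WriteLine($"2nd order statistic = {SortedArrayStatistics.OrderStatistic(data, 2)}");
        Console.WriteLine($"0.9 quantile (R-8)  = {SortedArrayStatistics.Quantile(data, 0.9)}");
        Console.WriteLine($"empirical CDF at 5  = {SortedArrayStatistics.EmpiricalCDF(data, 5.0)}");
    }
}
```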
- - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - a-parameter - b-parameter - c-parameter - d-parameter - - - - Estimates the tau-th quantile from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - Sample array, must be sorted ascendingly. - Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the quantile tau from the sorted data array (ascending). - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the sorted data array (ascending). - The rank definition can be specified to be compatible - with an existing system. - - - - - Extension methods to return basic statistics on set of data. - - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the maximum value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The sample data. - The maximum value in the sample data. - - - - Returns the minimum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the maximum absolute value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The maximum value in the sample data. - - - - Returns the minimum magnitude and phase value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the minimum magnitude and phase value in the sample data. 
- Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum magnitude and phase value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Returns the maximum magnitude and phase value in the sample data. - Returns NaN if data is empty or if any entry is NaN. - - The sample data. - The minimum value in the sample data. - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Evaluates the sample mean, an estimate of the population mean. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The data to calculate the mean of. - The mean of the sample. - - - - Evaluates the geometric mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the geometric mean of. - The geometric mean of the sample. - - - - Evaluates the geometric mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the geometric mean of. - The geometric mean of the sample. - - - - Evaluates the harmonic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the harmonic mean of. - The harmonic mean of the sample. - - - - Evaluates the harmonic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the harmonic mean of. - The harmonic mean of the sample. - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the variance from the provided full population. - On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the unbiased population standard deviation from the provided samples. 
- On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - - - - Evaluates the standard deviation from the provided full population. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the unbiased population skewness from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than three entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population skewness from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than three entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the skewness from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than two entries or if any entry is NaN. - - The full population data. - - - - Evaluates the skewness from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the unbiased population kurtosis from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than four entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population kurtosis from the provided samples. - Uses a normalizer (Bessel's correction; type 2). - Returns NaN if data has less than four entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - - - - Evaluates the kurtosis from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than three entries or if any entry is NaN. - - The full population data. - - - - Evaluates the kurtosis from the full population. 
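
The entries above repeatedly distinguish the N-1 (sample, Bessel-corrected) from the N (population) normalizer. A small sketch of the corresponding extension methods, assuming they live in `MathNet.Numerics.Statistics.Statistics` and extend `IEnumerable<double>` as the overload lists above suggest:

```csharp
using System;
using MathNet.Numerics.Statistics;

class DescriptiveDemo
{
    static void Main()
    {
        var data = new[] { 2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0 };

        // Sample (N-1) versus population (N) normalization, as described above.
        Console.WriteLine($"mean                = {data.Mean():F3}");
        Console.WriteLine($"sample variance     = {data.Variance():F3}");
        Console.WriteLine($"population variance = {data.PopulationVariance():F3}");
        Console.WriteLine($"sample stddev       = {data.StandardDeviation():F3}");
        Console.WriteLine($"skewness (sample)   = {data.Skewness():F3}");
        Console.WriteLine($"kurtosis (sample)   = {data.Kurtosis():F3}");
    }
}
```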
- Does not use a normalizer and would thus be biased if applied to a subset (type 1). - Returns NaN if data has less than three entries or if any entry is NaN. - Null-entries are ignored. - - The full population data. - - - - Estimates the sample mean and the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population variance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the sample mean and the unbiased population standard deviation from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - The data to calculate the mean of. - The mean of the sample. - - - - Estimates the unbiased population skewness and kurtosis from the provided samples in a single pass. - Uses a normalizer (Bessel's correction; type 2). - - A subset of samples, sampled from the full population. - - - - Evaluates the skewness and kurtosis from the full population. - Does not use a normalizer and would thus be biased if applied to a subset (type 1). - - The full population data. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Estimates the unbiased population covariance from the provided samples. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - Null-entries are ignored. - - A subset of samples, sampled from the full population. - A subset of samples, sampled from the full population. - - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - The full population data. 
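
For the combined estimators and the covariance entries above, a hedged sketch; the tuple-returning `MeanStandardDeviation` and the `Covariance`/`PopulationCovariance` extensions are assumptions based on the descriptions, not verified signatures:

```csharp
using System;
using MathNet.Numerics.Statistics;

class CovarianceDemo
{
    static void Main()
    {
        var x = new[] { 1.0, 2.0, 3.0, 4.0, 5.0 };
        var y = new[] { 2.1, 3.9, 6.2, 8.1, 9.8 };

        // Mean and unbiased standard deviation estimated together in one pass.
        var ms = x.MeanStandardDeviation();
        Console.WriteLine($"x: mean={ms.Item1:F2}, stddev={ms.Item2:F2}");

        // Sample (N-1) and population (N) covariance of two equally long samples.
        Console.WriteLine($"cov (sample)     = {x.Covariance(y):F3}");
        Console.WriteLine($"cov (population) = {x.PopulationCovariance(y):F3}");
    }
}
```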
- - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - The full population data. - The full population data. - - - - Evaluates the population covariance from the provided full populations. - On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The full population data. - The full population data. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the RMS of. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - - The data to calculate the RMS of. - - - - Evaluates the root mean square (RMS) also known as quadratic mean. - Returns NaN if data is empty or if any entry is NaN. - Null-entries are ignored. - - The data to calculate the mean of. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the sample median from the provided samples (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). 
- Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the tau-th quantile from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile definition, to choose what product/definition it should be consistent with - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - Percentile selector, between 0 and 100 (inclusive). - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the p-Percentile value from the provided samples. - If a non-integer Percentile is needed, use Quantile instead. 
- Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the first quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the third quartile value from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates the inter-quartile range from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. - Approximately median-unbiased regardless of the sample distribution (R8). - - The data sample sequence. - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - One-based order of the statistic, must be between 1 and N (inclusive). - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - - - - Returns the order statistic (order 1..N) from the provided samples. - - The data sample sequence. - - - - Evaluates the rank of each entry of the provided samples. - The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the provided samples. - The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Evaluates the rank of each entry of the provided samples. 
- The rank definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Quantile value. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the quantile tau from the provided samples. - The tau-th quantile is the data value where the cumulative distribution - function crosses tau. The quantile definition can be specified to be compatible - with an existing system. - - The data sample sequence. - Rank definition, to choose how ties should be handled and what product/definition it should be consistent with - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - The value where to estimate the CDF at. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. - - - - Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. - - The data sample sequence. 
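
The quantile, rank and empirical-CDF entries above also exist as extensions on unsorted samples. A sketch, assuming the tie-handling options from the list further up are exposed as a `RankDefinition` enum (the member name `Average` is an assumption) and that the extensions are named `Quantile`, `Percentile`, `Ranks` and `EmpiricalCDF`:

```csharp
using System;
using MathNet.Numerics.Statistics;

class QuantileRankDemo
{
    static void Main()
    {
        var data = new[] { 3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0 };

        // R-8 quantile and the equivalent integer percentile.
        Console.WriteLine($"0.25 quantile   = {data.Quantile(0.25):F3}");
        Console.WriteLine($"25th percentile = {data.Percentile(25):F3}");

        // Ranks with ties replaced by their mean (the default tie handling described above).
        double[] ranks = data.Ranks(RankDefinition.Average);
        Console.WriteLine("ranks: " + string.Join(", ", ranks));

        // Empirical CDF evaluated at a point.
        Console.WriteLine($"ECDF(4.0) = {data.EmpiricalCDF(4.0):F3}");
    }
}
```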
- - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - Quantile selector, between 0.0 and 1.0 (inclusive). - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Estimates the empirical inverse CDF at tau from the provided samples. - - The data sample sequence. - - - - Calculates the entropy of a stream of double values in bits. - Returns NaN if any of the values in the stream are NaN. - - The data sample sequence. - - - - Calculates the entropy of a stream of double values in bits. - Returns NaN if any of the values in the stream are NaN. - Null-entries are ignored. - - The data sample sequence. - - - - Evaluates the sample mean over a moving window, for each samples. - Returns NaN if no data is empty or if any entry is NaN. - - The sample stream to calculate the mean of. - The number of last samples to consider. - - - - Statistics operating on an IEnumerable in a single pass, without keeping the full data in memory. - Can be used in a streaming way, e.g. on large datasets not fitting into memory. - - - - - - - - Returns the smallest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the smallest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. 
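
Two of the entries just above describe an entropy measure in bits and a mean over a moving window of the last k samples. A sketch, assuming these are exposed as `Entropy` and `MovingAverage` extensions (both names are inferred from the descriptions, not verified):

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Statistics;

class MovingDemo
{
    static void Main()
    {
        var samples = new[] { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0 };

        // Mean over a sliding window of the last 3 samples, one output value per input sample.
        var smoothed = samples.MovingAverage(3).ToArray();
        Console.WriteLine("moving average: " +
            string.Join(", ", smoothed.Select(v => v.ToString("F2"))));

        // Entropy of a discrete-valued stream: p = {0.25, 0.25, 0.5} gives 1.5 bits.
        var symbols = new[] { 0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0 };
        Console.WriteLine($"entropy = {symbols.Entropy():F3} bits");
    }
}
```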
- - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Returns the largest absolute value from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the geometric mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the geometric mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the harmonic mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the harmonic mean of the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. 
- - - - Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample stream. - Second sample stream. - - - - Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N-1 normalizer (Bessel's correction). - Returns NaN if data has less than two entries or if any entry is NaN. - - First sample stream. - Second sample stream. - - - - Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population stream. - Second population stream. 
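
The single-pass entries above are the streaming counterparts of the array statistics: they consume an `IEnumerable<double>` once and never materialize it. A sketch, assuming static `StreamingStatistics.Mean`, `StandardDeviation` and `Covariance` methods as listed:

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Statistics;

class StreamingDemo
{
    static void Main()
    {
        // A lazily generated sequence; each call below re-enumerates it in a single pass,
        // so nothing has to be held in memory.
        var stream = Enumerable.Range(1, 1000).Select(i => (double)i);

        Console.WriteLine($"mean   = {StreamingStatistics.Mean(stream):F1}");
        Console.WriteLine($"stddev = {StreamingStatistics.StandardDeviation(stream):F1}");

        // Covariance of two equally long streams (here perfectly linearly related).
        var a = new[] { 1.0, 2.0, 3.0, 4.0 };
        var b = new[] { 2.0, 4.0, 6.0, 8.0 };
        Console.WriteLine($"sample covariance = {StreamingStatistics.Covariance(a, b):F2}");
    }
}
```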
- - - - Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. - On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. - Returns NaN if data is empty or if any entry is NaN. - - First population stream. - Second population stream. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. - Returns NaN if data is empty or any entry is NaN. - - Sample stream, no sorting is assumed. - - - - Calculates the entropy of a stream of double values. - Returns NaN if any of the values in the stream are NaN. - - The input stream to evaluate. - - - - - Used to simplify parallel code, particularly between the .NET 4.0 and Silverlight Code. - - - - - Executes a for loop in which iterations may run in parallel. - - The start index, inclusive. - The end index, exclusive. - The body to be invoked for each iteration range. - - - - Executes a for loop in which iterations may run in parallel. - - The start index, inclusive. - The end index, exclusive. - The partition size for splitting work into smaller pieces. - The body to be invoked for each iteration range. - - - - Executes each of the provided actions inside a discrete, asynchronous task. - - An array of actions to execute. - The actions array contains a null element. - At least one invocation of the actions threw an exception. - - - - Selects an item (such as Max or Min). - - Starting index of the loop. - Ending index of the loop - The function to select items over a subset. - The function to select the item of selection from the subsets. - The selected value. - - - - Selects an item (such as Max or Min). - - The array to iterate over. - The function to select items over a subset. - The function to select the item of selection from the subsets. - The selected value. - - - - Selects an item (such as Max or Min). - - Starting index of the loop. - Ending index of the loop - The function to select items over a subset. - The function to select the item of selection from the subsets. - Default result of the reduce function on an empty set. - The selected value. - - - - Selects an item (such as Max or Min). - - The array to iterate over. - The function to select items over a subset. - The function to select the item of selection from the subsets. - Default result of the reduce function on an empty set. - The selected value. - - - - Double-precision trigonometry toolkit. - - - - - Constant to convert a degree to grad. - - - - - Converts a degree (360-periodic) angle to a grad (400-periodic) angle. - - The degree to convert. - The converted grad angle. - - - - Converts a degree (360-periodic) angle to a radian (2*Pi-periodic) angle. - - The degree to convert. - The converted radian angle. - - - - Converts a grad (400-periodic) angle to a degree (360-periodic) angle. - - The grad to convert. - The converted degree. - - - - Converts a grad (400-periodic) angle to a radian (2*Pi-periodic) angle. - - The grad to convert. - The converted radian. - - - - Converts a radian (2*Pi-periodic) angle to a degree (360-periodic) angle. - - The radian to convert. - The converted degree. 
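
The angle-conversion entries above cover degree (360-periodic), grad (400-periodic) and radian (2π-periodic) angles. A small sketch, assuming conversion helpers on `MathNet.Numerics.Trig` such as `DegreeToRadian`, `RadianToDegree` and `GradToDegree` (the exact method names are assumptions based on the descriptions):

```csharp
using System;
using MathNet.Numerics;

class AngleDemo
{
    static void Main()
    {
        // 90 degrees = pi/2 radians = 100 grad.
        Console.WriteLine($"90 deg  -> {Trig.DegreeToRadian(90.0):F4} rad");
        Console.WriteLine($"pi/2    -> {Trig.RadianToDegree(Math.PI / 2):F1} deg");
        Console.WriteLine($"100 gon -> {Trig.GradToDegree(100.0):F1} deg");
    }
}
```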
- - - - Converts a radian (2*Pi-periodic) angle to a grad (400-periodic) angle. - - The radian to convert. - The converted grad. - - - - Normalized Sinc function. sinc(x) = sin(pi*x)/(pi*x). - - - - - Trigonometric Sine of an angle in radian, or opposite / hypotenuse. - - The angle in radian. - The sine of the radian angle. - - - - Trigonometric Sine of a Complex number. - - The complex value. - The sine of the complex number. - - - - Trigonometric Cosine of an angle in radian, or adjacent / hypotenuse. - - The angle in radian. - The cosine of an angle in radian. - - - - Trigonometric Cosine of a Complex number. - - The complex value. - The cosine of a complex number. - - - - Trigonometric Tangent of an angle in radian, or opposite / adjacent. - - The angle in radian. - The tangent of the radian angle. - - - - Trigonometric Tangent of a Complex number. - - The complex value. - The tangent of the complex number. - - - - Trigonometric Cotangent of an angle in radian, or adjacent / opposite. Reciprocal of the tangent. - - The angle in radian. - The cotangent of an angle in radian. - - - - Trigonometric Cotangent of a Complex number. - - The complex value. - The cotangent of the complex number. - - - - Trigonometric Secant of an angle in radian, or hypotenuse / adjacent. Reciprocal of the cosine. - - The angle in radian. - The secant of the radian angle. - - - - Trigonometric Secant of a Complex number. - - The complex value. - The secant of the complex number. - - - - Trigonometric Cosecant of an angle in radian, or hypotenuse / opposite. Reciprocal of the sine. - - The angle in radian. - Cosecant of an angle in radian. - - - - Trigonometric Cosecant of a Complex number. - - The complex value. - The cosecant of a complex number. - - - - Trigonometric principal Arc Sine in radian - - The opposite for a unit hypotenuse (i.e. opposite / hypotenuse). - The angle in radian. - - - - Trigonometric principal Arc Sine of this Complex number. - - The complex value. - The arc sine of a complex number. - - - - Trigonometric principal Arc Cosine in radian - - The adjacent for a unit hypotenuse (i.e. adjacent / hypotenuse). - The angle in radian. - - - - Trigonometric principal Arc Cosine of this Complex number. - - The complex value. - The arc cosine of a complex number. - - - - Trigonometric principal Arc Tangent in radian - - The opposite for a unit adjacent (i.e. opposite / adjacent). - The angle in radian. - - - - Trigonometric principal Arc Tangent of this Complex number. - - The complex value. - The arc tangent of a complex number. - - - - Trigonometric principal Arc Cotangent in radian - - The adjacent for a unit opposite (i.e. adjacent / opposite). - The angle in radian. - - - - Trigonometric principal Arc Cotangent of this Complex number. - - The complex value. - The arc cotangent of a complex number. - - - - Trigonometric principal Arc Secant in radian - - The hypotenuse for a unit adjacent (i.e. hypotenuse / adjacent). - The angle in radian. - - - - Trigonometric principal Arc Secant of this Complex number. - - The complex value. - The arc secant of a complex number. - - - - Trigonometric principal Arc Cosecant in radian - - The hypotenuse for a unit opposite (i.e. hypotenuse / opposite). - The angle in radian. - - - - Trigonometric principal Arc Cosecant of this Complex number. - - The complex value. - The arc cosecant of a complex number. - - - - Hyperbolic Sine - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic sine of the angle. 
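
The entries above define the normalized sinc, the reciprocal trigonometric functions and their inverses, and the hyperbolic functions. A hedged sketch, assuming they are exposed on `MathNet.Numerics.Trig` as `Sinc`, `Sec`, `Csc`, `Cot`, `Sinh` and `Asinh`:

```csharp
using System;
using MathNet.Numerics;

class TrigDemo
{
    static void Main()
    {
        // Normalized sinc: sin(pi*x)/(pi*x), so sinc(0.5) = 2/pi ≈ 0.6366.
        Console.WriteLine($"sinc(0.5) = {Trig.Sinc(0.5):F4}");

        // Reciprocal trigonometric functions of an angle in radians.
        double x = 0.75;
        Console.WriteLine($"sec(x) = {Trig.Sec(x):F4}, csc(x) = {Trig.Csc(x):F4}, cot(x) = {Trig.Cot(x):F4}");

        // Hyperbolic sine and its inverse (area) function round-trip.
        Console.WriteLine($"asinh(sinh(1.2)) = {Trig.Asinh(Trig.Sinh(1.2)):F4}");
    }
}
```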
- - - - Hyperbolic Sine of a Complex number. - - The complex value. - The hyperbolic sine of a complex number. - - - - Hyperbolic Cosine - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic Cosine of the angle. - - - - Hyperbolic Cosine of a Complex number. - - The complex value. - The hyperbolic cosine of a complex number. - - - - Hyperbolic Tangent in radian - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic tangent of the angle. - - - - Hyperbolic Tangent of a Complex number. - - The complex value. - The hyperbolic tangent of a complex number. - - - - Hyperbolic Cotangent - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic cotangent of the angle. - - - - Hyperbolic Cotangent of a Complex number. - - The complex value. - The hyperbolic cotangent of a complex number. - - - - Hyperbolic Secant - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic secant of the angle. - - - - Hyperbolic Secant of a Complex number. - - The complex value. - The hyperbolic secant of a complex number. - - - - Hyperbolic Cosecant - - The hyperbolic angle, i.e. the area of the hyperbolic sector. - The hyperbolic cosecant of the angle. - - - - Hyperbolic Cosecant of a Complex number. - - The complex value. - The hyperbolic cosecant of a complex number. - - - - Hyperbolic Area Sine - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Sine of this Complex number. - - The complex value. - The hyperbolic arc sine of a complex number. - - - - Hyperbolic Area Cosine - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cosine of this Complex number. - - The complex value. - The hyperbolic arc cosine of a complex number. - - - - Hyperbolic Area Tangent - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Tangent of this Complex number. - - The complex value. - The hyperbolic arc tangent of a complex number. - - - - Hyperbolic Area Cotangent - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cotangent of this Complex number. - - The complex value. - The hyperbolic arc cotangent of a complex number. - - - - Hyperbolic Area Secant - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Secant of this Complex number. - - The complex value. - The hyperbolic arc secant of a complex number. - - - - Hyperbolic Area Cosecant - - The real value. - The hyperbolic angle, i.e. the area of its hyperbolic sector. - - - - Hyperbolic Area Cosecant of this Complex number. - - The complex value. - The hyperbolic arc cosecant of a complex number. - - - - Hamming window. Named after Richard Hamming. - Symmetric version, useful e.g. for filter design purposes. - - - - - Hamming window. Named after Richard Hamming. - Periodic version, useful e.g. for FFT purposes. - - - - - Hann window. Named after Julius von Hann. - Symmetric version, useful e.g. for filter design purposes. - - - - - Hann window. Named after Julius von Hann. - Periodic version, useful e.g. for FFT purposes. - - - - - Cosine window. - Symmetric version, useful e.g. for filter design purposes. - - - - - Cosine window. - Periodic version, useful e.g. for FFT purposes. - - - - - Lanczos window. - Symmetric version, useful e.g. for filter design purposes. - - - - - Lanczos window. 
- Periodic version, useful e.g. for FFT purposes. - - - - - Gauss window. - - - - - Blackman window. - - - - - Blackman-Harris window. - - - - - Blackman-Nuttall window. - - - - - Bartlett window. - - - - - Bartlett-Hann window. - - - - - Nuttall window. - - - - - Flat top window. - - - - - Uniform rectangular (Dirichlet) window. - - - - - Triangular window. - - - - - Tukey tapering window. A rectangular window bounded - by half a cosine window on each side. - - Width of the window - Fraction of the window occupied by the cosine parts - -
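
The window functions listed above come in a symmetric variant (useful for filter design) and a periodic variant (useful for FFT block processing), plus parameterized windows such as Gauss and Tukey. A sketch, assuming `MathNet.Numerics.Window` exposes them as `Hamming`, `HannPeriodic` and `Tukey(width, r)` returning `double[]` (the names and the Tukey parameter are assumptions based on the descriptions):

```csharp
using System;
using System.Linq;
using MathNet.Numerics;

class WindowDemo
{
    static void Main()
    {
        // Symmetric Hamming window, e.g. for FIR filter design.
        double[] hamming = Window.Hamming(8);
        Console.WriteLine("Hamming: " +
            string.Join(", ", hamming.Select(v => v.ToString("F3"))));

        // Periodic Hann window, the variant intended for FFT block processing.
        double[] hann = Window.HannPeriodic(8);
        Console.WriteLine("Hann (periodic): " +
            string.Join(", ", hann.Select(v => v.ToString("F3"))));

        // Tukey window: flat in the middle, cosine-tapered over half of its width.
        double[] tukey = Window.Tukey(16, 0.5);
        Console.WriteLine("Tukey: " +
            string.Join(", ", tukey.Select(v => v.ToString("F3"))));
    }
}
```

When a block of received audio is transformed with an FFT, the periodic variants are the ones to multiply into the block beforehand; the symmetric variants are meant for designing the filter taps themselves.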