Linux Signals

In the list of UNIX signals, the signals numbered 1-31 are the traditional UNIX signals; they are unreliable (non-real-time) signals. The higher-numbered signals (SIGRTMIN through SIGRTMAX, 34-64 on Linux with glibc) were added later and are called reliable signals (real-time signals). The difference between unreliable and reliable signals is that the former are not queued and may therefore be lost, while the latter are never lost. The listing below shows the signals in common use today:

```
$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX
```

Signal meanings: below we discuss the signals numbered below SIGRTMIN.
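To make the handling of these signals concrete, here is a minimal Python sketch (the handler and the choice of SIGUSR1 are my own illustration, not from the original post). It installs a handler for a traditional signal and shows that SIGKILL can never be caught:

```python
import os
import signal

# Install a handler for SIGUSR1, one of the traditional (1-31) signals.
def on_usr1(signum, frame):
    print(f"caught signal {signum} ({signal.Signals(signum).name})")

signal.signal(signal.SIGUSR1, on_usr1)

# SIGKILL (like SIGSTOP) cannot be caught or ignored; this raises OSError.
try:
    signal.signal(signal.SIGKILL, on_usr1)
except OSError as e:
    print(f"cannot handle SIGKILL: {e}")

os.kill(os.getpid(), signal.SIGUSR1)  # deliver a SIGUSR1 to ourselves
```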

Bayesian Analysis

Terms

- Probability: the chance of the occurrence of an event.
- Probability mass: the probability that a discrete random variable is exactly equal to some value.
- Probability density: the relative likelihood that a continuous random variable lies near some value; for a continuous variable, the probability of being exactly equal to any single value is zero.
- Probability Mass Function (PMF): a function which gives the probability that a discrete random variable is exactly equal to some value.
- Probability Density Function (PDF): a function whose value at a point is not itself a probability; probabilities for a continuous random variable are obtained by integrating the PDF over an interval, $P(a \le X \le b) = \int_a^b f(x)\,dx$.
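A small sketch of the PMF/PDF distinction using scipy.stats (the binomial and normal distributions here are arbitrary illustration choices, not from the original post):

```python
from scipy.stats import binom, norm

# PMF: P(X = 3) for X ~ Binomial(n=10, p=0.5) is a genuine probability.
print(binom.pmf(3, n=10, p=0.5))       # ~0.117

# PDF: the density of N(0, 1) at x = 0 is NOT a probability...
print(norm.pdf(0.0))                   # ~0.399

# ...probabilities come from integrating the density over an interval.
print(norm.cdf(1.0) - norm.cdf(-1.0))  # P(-1 <= X <= 1) ~ 0.683
```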

Diffie Hellman & Elliptic Curve

Diffie-Hellman relies on two properties of modular exponentiation. It is commutative, $(g^a)^b = (g^b)^a \pmod p$, which is what lets both parties arrive at the same shared secret; and it is one-way: the exponentiation is easy to compute in the forward direction but computationally expensive to reverse (the discrete logarithm problem).
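A minimal sketch of the exchange in Python; the tiny prime and the hard-coded private values are illustrative only and far too small for real use:

```python
# Toy Diffie-Hellman over a small prime (illustration only -- real
# deployments use primes of 2048+ bits or elliptic-curve groups).
p = 23   # public prime modulus
g = 5    # public generator

a = 6    # Alice's private key
b = 15   # Bob's private key

A = pow(g, a, p)  # Alice sends g^a mod p
B = pow(g, b, p)  # Bob sends g^b mod p

# Commutativity: (g^b)^a == (g^a)^b (mod p), so both sides agree.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
print(shared_alice)  # 2
```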

IPsec Configuration With Strongswan & Iptables

Network Overview

Assume that we have a typical testbed that looks like this:

The testbed is initially configured so that the clients (IoT devices) in each customer's network access a local server. Your goal is to relocate the server functionality from the customers' local networks to a cloud platform, which is represented by the network on the right-hand side. The Router in the above topology represents routing across the Internet between the customer sites and the cloud.

IPsec Protocol

The IPsec protocol suite consists of three parts:

- Authentication Header (AH)
- Encapsulating Security Payload (ESP)
- IKE (Internet Key Exchange)
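Since the post is about configuring this with strongSwan, here is a minimal, hypothetical ipsec.conf sketch for a site-to-site tunnel between one customer subnet and the cloud subnet; all addresses, subnets, and the connection name are invented placeholders, not values from the original testbed:

```
# /etc/ipsec.conf -- hypothetical site-to-site tunnel (customer gateway side)
conn customer1-to-cloud
    keyexchange=ikev2
    authby=secret            # pre-shared key, defined in /etc/ipsec.secrets
    left=192.0.2.10          # this gateway's public address (placeholder)
    leftsubnet=10.0.1.0/24   # customer LAN behind this gateway (placeholder)
    right=198.51.100.20      # cloud gateway's public address (placeholder)
    rightsubnet=10.0.2.0/24  # cloud network (placeholder)
    esp=aes256-sha256
    auto=start
```

On the same gateway, iptables must be kept from NATting the tunnelled traffic; a typical exemption rule for the placeholder subnets above is `iptables -t nat -I POSTROUTING -s 10.0.1.0/24 -d 10.0.2.0/24 -j ACCEPT`.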

Machine Learning With LCMS

Learning Diary 2

1 Abstract

In metabolite analysis, we try to study and annotate different molecules by their features. However, small molecules typically contain similar substructures, and the space of potential structures is tremendous (over 600 million). The set of unique spectra covers only a small fraction of the entire database. Given the output of the LC-MS^2 process, the machine learning algorithm works like a search engine that assigns ranked scores to predicted structures of high similarity.

HOFM

Abstract

Practical experience and medical treatment records show that combinations of multiple drugs may positively affect the curing process for certain diseases. People are investigating the effectiveness of drug combinations at certain dosages compared with scenarios where the drugs are used individually. However, the number of potential combinations may grow exponentially as more medicines are taken into consideration; this is referred to as combinatorial explosion. Hence, we need to prioritize and narrow the search for combinations, and certain machine learning methods can contribute to this.

Bidirectional LSTM

Why use a bidirectional LSTM?

A unidirectional RNN infers what comes later from the information that came before, but sometimes the preceding words alone are not enough. For example: "I don't feel well today, so I plan to ____ for a day." Based only on "don't feel well", the blank could be "go to the hospital", "sleep", "take a day off", and so on. Adding the following "for a day" narrows the choices: "go to the hospital" no longer fits, while "take a day off" or "rest" become much more likely.

What is a bidirectional LSTM?

The hidden layer of a bidirectional recurrent network keeps two values: A takes part in the forward computation, and A' takes part in the backward computation. The final output y depends on both A and A'.

Structure

We have already seen, in the introduction to the Encoder-Decoder LSTM, the benefit of reversing the order of the input sequence for LSTMs.

"We were surprised by how much reversing the words in the source sentences improved performance." — Sequence to Sequence Learning with Neural Networks, 2014.

Bidirectional LSTMs address the problem of getting the most out of the input sequence by stepping through input time steps in both the forward and backward directions. In practice, the architecture duplicates the first recurrent layer so that there are two layers side by side: the input sequence is fed as-is to the first layer, and a reversed copy of the input sequence is fed to the second. This approach was developed some time ago as a general method for improving the performance of recurrent neural networks (RNNs).

"To overcome the limitations of a regular RNN ... we propose a bidirectional recurrent neural network (BRNN) that can be trained using all available input information in the past and future of a specific time frame. ... The idea is to split the state neurons of a regular RNN into a part responsible for the positive time direction (forward states) and a part for the negative time direction (backward states)." — Bidirectional Recurrent Neural Networks, 1997.

The approach has also been applied to LSTM recurrent neural networks. Feeding the whole sequence forwards and backwards rests on the assumption that the entire sequence is available, which in practice is usually a requirement when using vectorized inputs. It may, however, raise a philosophical concern where, ideally, time steps should be provided in order and just-in-time. In speech recognition, feeding the input sequence bidirectionally is justified, because there is evidence that humans use the context of the whole utterance to interpret what is being said, rather than a strictly linear interpretation.

"... relies on knowledge of the future that seems, at first sight, to violate causality. How can we base our understanding of what we have heard on something that has not been said yet? However, human listeners do exactly that. Sounds, words, and even whole sentences that at first mean nothing are found to make sense in the light of future context. What we must keep in mind is the distinction between tasks that are truly online, requiring an output after every input, and those where outputs are only needed at the end of some input segment." — Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures, 2005.

Although bidirectional LSTMs were developed for speech recognition, using bidirectional input sequences has become a staple of sequence prediction with LSTMs as a way of lifting model performance.
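As a concrete sketch, this is how the duplicated forward/backward layer looks in Keras; the layer sizes and input shape are arbitrary choices for illustration, not from the original post:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dense

model = Sequential([
    # Bidirectional wraps one LSTM into two side-by-side copies: one reads
    # the sequence forwards, the other reads a reversed copy, and their
    # per-timestep outputs are merged (concatenated by default).
    Bidirectional(LSTM(32, return_sequences=True), input_shape=(10, 8)),
    Dense(1, activation='sigmoid'),  # a label per time step
])
model.summary()
```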

Convolutional Neural Network

1 Definition of convolution

The mathematical form of convolution is:

Continuous: $Conv(x) = \int f(x-\alpha)t(\alpha)d\alpha$

Discrete: $Conv(x)=\sum_{\alpha} f(x-\alpha)t(\alpha)$

Operator notation: $Conv(x) = (f * t)(x)$, where $*$ represents the convolution operation

2 Convolutional Neural Network

2.1 Convolutional layer

Terms

- W: width or height of the input matrix
- F: receptive field (filter size)
- S: stride
- P: zero-padding (number of zeros padded on each border)
- K: depth of the output (number of output channels)

The output width (or height) after a convolution follows: $$\frac{W-F+2P}{S} + 1$$
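A quick sketch to sanity-check the output-size formula (the function name is my own):

```python
def conv_output_size(W: int, F: int, S: int, P: int) -> int:
    """Output width/height of a convolution: (W - F + 2P) / S + 1."""
    return (W - F + 2 * P) // S + 1

# A 7x7 input with a 3x3 receptive field, stride 1, padding 1 keeps size 7.
print(conv_output_size(W=7, F=3, S=1, P=1))  # 7
# With stride 2: (7 - 3 + 2) // 2 + 1 = 4.
print(conv_output_size(W=7, F=3, S=2, P=1))  # 4
```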

Gradient Descent Update Rule

Gradient descent update rule $$ W^{[l]} = W^{[l]} - \alpha \, dW^{[l]}\tag{1} $$ $$ b^{[l]} = b^{[l]} - \alpha \, db^{[l]}\tag{2} $$ where $L$ is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop, while the first parameters are W[1] and b[1], so you need to shift l to l+1 when coding.
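A minimal sketch of the update loop described above; the dictionary key names follow the W1/b1 convention mentioned in the text, and the function name is my own:

```python
import numpy as np

def update_parameters(parameters: dict, grads: dict, learning_rate: float) -> dict:
    """One gradient descent step over all layers."""
    L = len(parameters) // 2              # keys come in (Wl, bl) pairs
    for l in range(L):                    # l = 0 .. L-1, names are 1-indexed
        parameters["W" + str(l + 1)] -= learning_rate * grads["dW" + str(l + 1)]
        parameters["b" + str(l + 1)] -= learning_rate * grads["db" + str(l + 1)]
    return parameters

# Tiny usage example with made-up gradients.
params = {"W1": np.ones((2, 2)), "b1": np.zeros((2, 1))}
grads = {"dW1": np.full((2, 2), 0.5), "db1": np.full((2, 1), 0.5)}
print(update_parameters(params, grads, learning_rate=0.1)["W1"])  # all 0.95
```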

Information Content & Cross Entropy

1 Information Content

1.1 Definition

In information theory, the information content, self-information, surprisal, or Shannon information is a basic quantity derived from the probability of a particular event occurring from a random variable. The information content tells how much information an event conveys.

1.2 Function

The function for information content must comply with both constraints below: the information of independent events adds up, while their probabilities multiply: $$f(x) = \sum_{i} I(p_i)$$ $$x = \prod_{i} p_i$$ As a result, the function can be described as: $$I(p) = -\log(p)$$
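A small numeric sketch of self-information and, since the title pairs it with cross-entropy, of the cross-entropy between two distributions; the example distributions are invented:

```python
import math

def information_content(p: float) -> float:
    """Self-information I(p) = -log2(p), in bits."""
    return -math.log2(p)

# A rare event carries more information than a common one.
print(information_content(0.5))   # 1.0 bit
print(information_content(0.01))  # ~6.64 bits

def cross_entropy(p: list, q: list) -> float:
    """H(p, q) = -sum_i p_i * log2(q_i)."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q))

p = [0.5, 0.5]              # true distribution
q = [0.9, 0.1]              # model distribution
print(cross_entropy(p, p))  # 1.0 (equals the entropy of p)
print(cross_entropy(p, q))  # ~1.74 (penalty for the mismatch)
```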

PageRank & SimRank

Random Walk

Given a graph, a random walk is an iterative process that starts from a random vertex and, at each step, either follows a random outgoing edge of the current vertex or jumps to a random vertex. The jump part is important because some vertices may not have any outgoing edges, so without jumping to another vertex a walk would terminate at those places. PageRank (PR) measures the stationary distribution of one specific kind of random walk that starts from a random vertex and, in each iteration, jumps to a random vertex with a predefined probability $p$, and follows a random outgoing edge of the current vertex with probability $1-p$.
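A minimal power-iteration sketch of this walk; the adjacency list, jump probability, and iteration count are arbitrary illustration choices:

```python
import numpy as np

def pagerank(out_links: dict, p: float = 0.15, iters: int = 100) -> np.ndarray:
    """Stationary distribution of the jump-with-probability-p random walk."""
    n = len(out_links)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        new_rank = np.full(n, p / n)  # mass from the random jumps
        for v, targets in out_links.items():
            if targets:
                share = (1 - p) * rank[v] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:
                # Dangling vertex: the walker always jumps from here.
                new_rank += (1 - p) * rank[v] / n
        rank = new_rank
    return rank

# 0 -> 1, 1 -> 2, 2 -> 0 and 1; vertex 3 has no outgoing edges.
print(pagerank({0: [1], 1: [2], 2: [0, 1], 3: []}))
```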

Type, Object & Metaclass in Python

The relationship between type and object

In one sentence: type is a subclass of object, and object is an instance of type. In the Python world, object sits at the top of the class inheritance (parent-child) hierarchy: it is the parent class of every data type. type sits at the top of the type-instance hierarchy: every object is an instance of it. Their relationship can be described like this:

- The first column on the whiteboard currently holds only type; let's call the things in this column Type.
- The second column holds things that are both the types of the third column and instances of the first column; let's call these objects TypeObject.
- The third column holds instances of the second column's types, which have no parent classes (__bases__); let's call them Instance.

See the Zhihu answer by jeff kit for details.

metaclass

A metaclass is often used to dynamically change class attributes before a class is created, for example relying on introspection, controlling inheritance, and so on. In short, it:

- intercepts the default creation of a class
- modifies class attributes, methods, etc.
- returns the modified class

A metaclass inherits from type and is the class of a class. Below is a most minimal metaclass example:

```python
class MyMetaclass(type):
    def __new__(cls, name: str, bases: tuple, attrs: dict) -> type:
        # some custom processing: uppercase every non-dunder attribute name
        attrs_processed = {}
        for attr_name, val in attrs.items():
            if not attr_name.startswith('__'):
                attrs_processed[attr_name.upper()] = val
            else:
                attrs_processed[attr_name] = val
        return super().__new__(cls, name, bases, attrs_processed)
```

One major use of metaclasses is building APIs. The ORM of Django (a web framework written in Python) is an example. First, define the following Model with Django:

```python
class Person(models.Model):
    name = models.CharField(max_length=30)
    age = models.IntegerField()
```

Then run the following code:

```python
guy = Person.objects.get(name='bob')
print(guy.age)  # result is 35
```

What gets printed here is not an IntegerField but an int, fetched from the database. This is because models.
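A quick usage sketch of the MyMetaclass example above (the class name Config and its attributes are invented for illustration):

```python
class Config(metaclass=MyMetaclass):
    debug = True
    retries = 3

print(hasattr(Config, 'DEBUG'))  # True: the attribute name was uppercased
print(hasattr(Config, 'debug'))  # False: the original name is gone
print(Config.RETRIES)            # 3
```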

Eigenvectors and Matrix Decomposition

Basic concepts

Linear Transformation

A matrix can be seen as a linear transformation, for which the most important factors are how fast and in which direction vectors are scaled:

- the eigenvalue is the scaling factor (the "velocity")
- the eigenvector is the direction the transformation preserves

Rank

The rank of a matrix is the dimension of its image, i.e., the number of linearly independent (basis) vectors produced by the transformation.

Eigenvectors and Eigenvalues

Matrix $A$ is a linear transformation, and its eigenvectors can be represented as follows: $$A\vec{v} = \lambda\vec{v}$$
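A short numpy sketch computing eigenvalues and eigenvectors and checking the defining property (the matrix is an arbitrary example):

```python
import numpy as np

# A simple symmetric matrix as the example transformation.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # [3. 1.]
print(eigenvectors)  # columns are the eigenvectors

# Verify A v = lambda v for the first eigenpair.
v = eigenvectors[:, 0]
assert np.allclose(A @ v, eigenvalues[0] * v)
```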