U-Net
> There is large consent that successful training of deep networks requires many thousand annotated training samples.
>In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently.
>The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization.
>We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
>Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin.
>Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL.
2015
>... a neural network commonly used to transform an image into another image of the same size.
>It is widely used in many image segmentation models and in pix2pix (Isola et al., 2017), well known for image-to-image translation.
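
As a rough illustration (not part of the original note), here is a minimal PyTorch-style sketch of a U-shaped encoder-decoder with one skip connection that maps an image to an output of the same spatial size. The class name `TinyUNet`, the channel counts, and the depth are made up for illustration and are much smaller than the architecture in the paper.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU; padding=1 keeps the spatial size unchanged.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Illustrative two-level U-shaped network (hypothetical, not the paper's exact layout)."""
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 64)       # contracting path, full resolution
        self.pool = nn.MaxPool2d(2)             # downsample by a factor of 2
        self.enc2 = conv_block(64, 128)         # bottleneck: more abstract, lower resolution
        self.up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)  # upsample back
        self.dec1 = conv_block(128, 64)         # expanding path after concatenating the skip
        self.head = nn.Conv2d(64, out_ch, kernel_size=1)  # per-pixel class scores

    def forward(self, x):
        s1 = self.enc1(x)                       # high-resolution feature kept for the skip
        b = self.enc2(self.pool(s1))            # abstract, low-resolution feature
        d = self.up(b)                          # restore resolution
        d = self.dec1(torch.cat([d, s1], dim=1))  # skip connection: concatenate encoder feature
        return self.head(d)                     # same spatial size as the input

x = torch.randn(1, 1, 512, 512)
print(TinyUNet()(x).shape)  # torch.Size([1, 2, 512, 512])
```

The concatenation of the encoder feature into the decoder is what lets the expanding path recover precise, pixel-level localization while still using the abstract context from the bottleneck.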
A U-shaped structure
>The encoder progressively extracts more abstract, lower-resolution information, and