Recently there has been rapid growth in the development of deep learning and other methodology for analyzing medical images, such as CT and MRI scans. Beyond the localization and diagnosis of tumors, these scans have also been used to predict time-to-event outcomes. With survival outcomes, however, right censoring introduces bias, and methods that account for this bias in imaging datasets are not widely studied. To predict survival past a given time point for a dataset of brain tumor histology images, we implemented several neural networks with the censoring-unbiased loss functions derived by Steingrimsson et al., 2018. We first used the pre-existing VGG16 neural network to extract features from the images and then trained neural networks with censoring-unbiased loss functions (Buckley-James, doubly robust). We compared the predictions of these networks to a Cox proportional hazards model using Brier scores and found similar performance, which motivates the future use of these methods for handling censoring bias in medical imaging analyses.
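Because the model comparison rests on Brier scores computed under right censoring, a minimal sketch of the inverse-probability-of-censoring-weighted (IPCW) Brier score may clarify how censored observations are handled. This is an illustrative NumPy implementation under standard assumptions (Kaplan-Meier estimation of the censoring distribution), not the authors' actual code; the function names `km_censoring_survival` and `ipcw_brier` are invented for this sketch.

```python
import numpy as np

def km_censoring_survival(times, events, t):
    """Kaplan-Meier estimate of the censoring survival function G(t) = P(C > t).
    Censoring is treated as the 'event' here, so the event indicator is flipped."""
    order = np.argsort(times)
    times, cens = times[order], 1 - events[order]
    n = len(times)
    G = 1.0
    for i, (ti, ci) in enumerate(zip(times, cens)):
        if ti > t:
            break
        if ci:  # a censoring event occurred at ti
            at_risk = n - i
            G *= 1.0 - 1.0 / at_risk
    return G

def ipcw_brier(times, events, surv_prob, t):
    """IPCW Brier score at horizon t.
    surv_prob[i] is the model's predicted P(T_i > t | x_i).
    Subjects censored before t get weight zero; the weights of the
    remaining subjects are inflated by 1/G to compensate."""
    n = len(times)
    Gt = km_censoring_survival(times, events, t)
    score = 0.0
    for ti, di, pi in zip(times, events, surv_prob):
        if ti <= t and di == 1:
            # observed event before t: true survival indicator is 0
            Gi = km_censoring_survival(times, events, ti - 1e-12)
            score += pi ** 2 / Gi
        elif ti > t:
            # known to survive past t: true survival indicator is 1
            score += (1.0 - pi) ** 2 / Gt
    return score / n
```

With no censoring the weights reduce to 1 and the score coincides with the ordinary Brier score for the binary outcome "survived past t"; lower values indicate better-calibrated survival predictions.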