In this work, we investigate the end-to-end performance of an uplink (UL) non-orthogonal multiple access (NOMA) system based on a bi-directional long short-term memory (Bi-LSTM) detector over frequency-flat, independent and identically distributed (i.i.d.) Rayleigh fading channels. In contrast to conventional successive interference cancellation (SIC) based MIMO-NOMA detection, the proposed deep learning (DL) based technique combines conventional multiple-input multiple-output (MIMO) transmission with power-domain NOMA to improve the symbol error rate (SER) performance. To this end, an optimal power allocation problem for the MIMO-NOMA scheme is formulated that maximizes the data throughput of the end-to-end system. Both perfect and imperfect SIC are considered, and the Bi-LSTM based MIMO-NOMA scheme is compared against an LSTM based MIMO-NOMA scheme.

Rather than first estimating the fading channel coefficients and then decoding the signals, as a conventional SIC receiver does, the proposed scheme estimates the data symbols directly using the more efficient Bi-LSTM network. The SIC NOMA system achieves 15 dB after $10^6$ iterations, whereas the DL-based MIMO-NOMA scheme achieves 15 dB after only 100 iterations; the Bi-LSTM MIMO-NOMA scheme thus outperforms the SIC MIMO-NOMA method by a factor of four, and the resulting 4 dB gap confirms that DL-based MIMO-NOMA outperforms the conventional SIC MIMO-NOMA approach. Furthermore, as the channel estimation error increases from 0 to 1, the DL performance degrades considerably. Nevertheless, even against an SIC detector with perfect channel state information (CSI), the DL detector remains superior as long as the channel estimation error stays below 0.07. When the actual and predicted channel states diverge, the DL detector's performance suffers significantly, yet it maintains its advantage within a predetermined tolerance range.
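For concreteness, a minimal sketch of the throughput-maximizing power allocation problem is given below for a two-user uplink case; the symbols $p_k$ (transmit powers), $h_k$ (channel gains), $\sigma^2$ (noise power), $P_{\max}$ (per-user power limit), and the SIC decoding order are illustrative assumptions rather than the paper's exact formulation. With user 1 decoded first (treating user 2 as interference) and user 2 decoded after SIC removes user 1's contribution, the sum-rate maximization reads
\begin{align}
\max_{p_1,\,p_2}\quad & \log_2\!\left(1 + \frac{p_1 |h_1|^2}{p_2 |h_2|^2 + \sigma^2}\right) + \log_2\!\left(1 + \frac{p_2 |h_2|^2}{\sigma^2}\right) \\
\text{s.t.}\quad & 0 \le p_k \le P_{\max}, \qquad k \in \{1, 2\}.
\end{align}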
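The Bi-LSTM detector itself can be sketched as follows. This is a minimal PyTorch illustration, not the paper's exact architecture: the two-user QPSK setup, the two-antenna receiver, the layer sizes, and the `BiLSTMDetector` name are all assumptions. The network maps a sequence of received-signal features directly to per-user symbol decisions, bypassing explicit channel estimation and SIC.

```python
# Minimal sketch of a Bi-LSTM symbol detector for uplink MIMO-NOMA
# (illustrative; the 2-user/QPSK setup, 2-antenna receiver, and layer
# sizes are assumptions, not the paper's exact architecture).
import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    def __init__(self, n_rx=2, n_users=2, m_ary=4, hidden=64):
        super().__init__()
        # Input per time step: real and imaginary parts of each receive antenna.
        self.lstm = nn.LSTM(input_size=2 * n_rx, hidden_size=hidden,
                            num_layers=2, batch_first=True, bidirectional=True)
        # One M-ary classification head per user.
        self.head = nn.Linear(2 * hidden, n_users * m_ary)
        self.n_users, self.m_ary = n_users, m_ary

    def forward(self, y):
        # y: (batch, seq_len, 2 * n_rx) received-signal features
        out, _ = self.lstm(y)            # (batch, seq_len, 2 * hidden)
        logits = self.head(out)          # (batch, seq_len, n_users * m_ary)
        return logits.view(*y.shape[:2], self.n_users, self.m_ary)

# Toy forward pass on random data standing in for the Rayleigh-faded signal.
det = BiLSTMDetector()
y = torch.randn(8, 16, 4)           # 8 frames, 16 symbols, 2 rx antennas
sym_logits = det(y)                 # (8, 16, 2, 4): per-user QPSK logits
hard = sym_logits.argmax(dim=-1)    # hard symbol decisions
```

Trained with a per-user cross-entropy loss on received sequences, such a detector learns the combined equalization and interference-cancellation mapping end to end, which is the efficiency argument made above for replacing explicit SIC.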
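For comparison, the conventional SIC baseline under imperfect CSI can be sketched as below. Again this is illustrative: the single-receive-antenna simplification, the QPSK alphabet, the power split, and the error model $\hat{h} = h + \sigma_e e$ are our own assumptions standing in for the paper's MIMO setup.

```python
# Minimal NumPy sketch of two-user power-domain SIC detection under
# imperfect CSI. Single receive antenna, QPSK, and the channel-error
# model h_hat = h + sigma_e * e are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def sic_detect(y, h_hat, p):
    """Decode the strongest user first, subtract it, then decode the next."""
    order = np.argsort(-p * np.abs(h_hat) ** 2)  # strongest effective gain first
    residual, decisions = y, {}
    for k in order:
        est = residual / (np.sqrt(p[k]) * h_hat[k])          # equalize user k
        s = qpsk[np.argmin(np.abs(qpsk - est))]              # nearest QPSK point
        decisions[k] = s
        residual = residual - np.sqrt(p[k]) * h_hat[k] * s   # cancel user k
    return decisions

# One symbol slot over i.i.d. Rayleigh fading with channel-estimation error.
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
sigma_e = 0.05                                    # error std (below the 0.07 threshold)
e = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
h_hat = h + sigma_e * e                           # imperfect CSI at the receiver
p = np.array([0.8, 0.2])                          # power allocation (assumed)
x = qpsk[rng.integers(0, 4, size=2)]              # users' transmitted symbols
noise = 0.01 * (rng.standard_normal() + 1j * rng.standard_normal())
y = np.sum(np.sqrt(p) * h * x) + noise            # superimposed uplink signal
print(sic_detect(y, h_hat, p))
```

Sweeping `sigma_e` from 0 toward 1 in such a simulation is one way to reproduce the qualitative behavior reported above: both detectors degrade with growing channel-estimation error, with the DL detector retaining its edge only while the error stays within the stated tolerance.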