We consider the problem of dynamic spectrum access for network utility maximization in multichannel wireless networks. The shared bandwidth is divided into K orthogonal channels, and users access the spectrum via a random access protocol. At the beginning of each time slot, each user selects a channel and transmits a packet with a certain attempt probability. At the end of the time slot, each user that transmitted a packet receives a local observation indicating whether its packet was successfully delivered (i.e., an ACK signal). The objective is to find a multi-user strategy that maximizes a given network utility in a distributed manner, without online coordination or message exchanges between users. Obtaining an optimal solution to the spectrum access problem is computationally expensive in general, due to the large state space and the partial observability of the states. To tackle this problem, we develop a distributed dynamic spectrum access algorithm based on deep multi-user reinforcement learning. Specifically, at each time slot each user maps its current state to a spectrum access action using a trained deep Q-network, so as to maximize the objective function. Experimental results demonstrate that users can learn good policies that achieve strong performance in this challenging partially observable setting from their ACK signals alone, without online coordination, message exchanges between users, or carrier sensing.
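To make the per-slot interaction concrete, the following is a minimal sketch in Python, assuming a small feed-forward Q-network per user and a simple collision model in which an ACK is received only when exactly one user transmits on a channel. All names here (QNetwork, run_slot, NUM_CHANNELS, attempt_prob, the state encoding) are illustrative assumptions for exposition, not a definitive implementation.

```python
# Illustrative sketch: one time slot of the multichannel random access
# protocol, with each user selecting an action from its own Q-network.
# Names and the state encoding are assumptions, not from the paper.
import numpy as np
import torch
import torch.nn as nn

NUM_CHANNELS = 4               # K orthogonal channels
NUM_USERS = 3
STATE_DIM = NUM_CHANNELS + 2   # assumed: one-hot of last action + last ACK bit

class QNetwork(nn.Module):
    """Maps a user's local state to Q-values over K+1 actions
    (action 0 = stay idle, action k = transmit on channel k)."""
    def __init__(self, state_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, state):
        return self.net(state)

def run_slot(q_nets, states, attempt_prob=0.8):
    """Simulate one time slot: each user greedily picks an action from its
    Q-network and, if the action is a channel, transmits with probability
    attempt_prob. A user observes an ACK only if it was the sole transmitter
    on its channel; there is no carrier sensing or message exchange."""
    actions, transmissions = [], {}
    for u, (q_net, state) in enumerate(zip(q_nets, states)):
        with torch.no_grad():
            action = int(torch.argmax(q_net(state)))
        if action > 0 and np.random.rand() < attempt_prob:
            transmissions.setdefault(action, []).append(u)
        actions.append(action)
    # Local observation: ACK iff exactly one user transmitted on the channel.
    acks = []
    for u, a in enumerate(actions):
        tx = transmissions.get(a, [])
        acks.append(u in tx and len(tx) == 1)
    return actions, acks

# Usage: independent per-user networks, no shared state at run time.
q_nets = [QNetwork(STATE_DIM, NUM_CHANNELS + 1) for _ in range(NUM_USERS)]
states = [torch.zeros(STATE_DIM) for _ in range(NUM_USERS)]
actions, acks = run_slot(q_nets, states)
```

Note that each user's decision in this sketch depends only on its own network and local state, mirroring the fully distributed operation described above; the training procedure that produces the Q-networks is not shown.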