DRAM stores each bit as a charge in a capacitor. This matters because capacitors can be made quite compact, and they hold a charge long enough to be useful. The basic idea is to split the memory address into two parts, which correspond to row/column coordinates in a grid of capacitors, each storing one bit.
There are row and column lines (wires) across the grid, and at each intersection is a capacitor that connects to the column wire through a transistor, which is gated by the row wire. The row portion of the memory address is decoded and selects one row wire. This activates the transistor in all of that row's storage cells (bits), so each capacitor's charge is "read out" through its column wire to a "sense amplifier" and then into a buffer. The second part of the address is decoded and selects bits from this read-out buffer. To write, the same sort of row/column selection happens, but charge is driven into the capacitor to set a bit (or drained to clear it).
Remember that the capacitors leak, though: each one has to be "refreshed" periodically (on the order of milliseconds) or its stored charge decays away. Essentially, refresh is done by reading each row, amplifying it, and writing it back, cycling through all the rows in turn.
The structure of a real DRAM chip is somewhat more complicated than this: many of these simple 2D array structures exist on the chip, some operating in parallel and some selected by other portions of the memory address. The DRAM protocol is very low-level, though, so the CPU side has to provide a fairly complicated memory controller to sequence the various states: putting portions of the address on the memory bus, waiting specific numbers of clocks, asserting other signals, and so on. Probably the most important single timing parameter is the time between providing the address and receiving data in a read operation. This latency has improved over the years and across DRAM generations, but it has fallen dramatically behind the speed of CPUs. (It's on the order of 50 ns, which is hundreds of CPU clock cycles.)