Speckle noise, which is inherent to Synthetic Aperture Radar (SAR) imaging, makes it difficult to detect targets and recognize spatial patterns on the Earth's surface. Despeckling is therefore a critical preprocessing step that smooths homogeneous regions while preserving features such as edges and point scatterers. In this study, a low-memory version of the previously proposed sparsity-driven despeckling (SDD) method is presented. All steps of the method are parallelized using OpenMP on the CPU and CUDA on the GPU. Execution times and despeckling performance are reported for real-world SAR images.