Complex Event Processing (CEP) is an event processing paradigm that performs real-time analytics over streaming data and matches high-level event patterns. At present, CEP systems are limited to processing structured data streams. Video streams, owing to their unstructured data model, prevent CEP systems from performing pattern matching over them. This work introduces a graph-based structure for continuously evolving video streams, which enables CEP systems to query complex video event patterns. We propose the Video Event Knowledge Graph (VEKG), a graph-driven representation of video data. VEKG models video objects as nodes and their relationships and interactions as edges over time and space. It creates a semantic knowledge representation of video data derived from the detection of high-level semantic concepts in the video using an ensemble of deep learning models. A CEP-based state optimization, the VEKG Time Aggregated Graph (VEKG-TAG), is proposed over the VEKG representation for faster event detection. VEKG-TAG is a spatiotemporal graph aggregation method that provides a summarized view of the VEKG graph over a given time length. We define a set of nine event pattern rules for two domains (Activity Recognition and Surveillance), which act as queries and are applied over VEKG graphs to discover complex event patterns. To show the efficacy of our approach, we performed extensive experiments over 801 video clips across 10 datasets. The proposed VEKG approach was compared with other state-of-the-art methods and was able to detect complex event patterns over videos with F-Scores ranging from 0.44 to 0.90. In the given experiments, the optimized VEKG-TAG reduced the number of VEKG nodes and edges by 99% and 93%, respectively, with a 5.19X faster search time, achieving sub-second median latencies of 4-20 milliseconds.
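To make the node/edge representation described above concrete, the following minimal Python sketch builds a toy VEKG-style graph with networkx, treating detected objects as nodes and their spatial and temporal relations as labeled edges; the object labels, relation names, and the simple "person near bag" pattern query are hypothetical illustrations under assumed conventions, not the paper's actual implementation or rule set.

```python
# Hypothetical sketch of a VEKG-style graph: detected video objects become
# nodes and their spatiotemporal relations become labeled edges. The
# detections, relation names, and query below are illustrative only.
import networkx as nx

def add_detection(g, obj_id, label, frame, bbox):
    """Add a detected object instance as a node with its frame index and bounding box."""
    g.add_node(obj_id, label=label, frame=frame, bbox=bbox)

# Build a tiny graph for two frames of a surveillance-style clip.
vekg = nx.MultiDiGraph()
add_detection(vekg, "person_1@t0", "person", frame=0, bbox=(10, 20, 60, 120))
add_detection(vekg, "bag_1@t0", "bag", frame=0, bbox=(15, 90, 40, 115))
add_detection(vekg, "person_1@t1", "person", frame=1, bbox=(80, 20, 130, 120))

# Spatial relation within a frame and temporal relation across frames.
vekg.add_edge("person_1@t0", "bag_1@t0", relation="near", frame=0)
vekg.add_edge("person_1@t0", "person_1@t1", relation="same_object", dt=1)

# A toy event-pattern query: a person "near" a bag in any frame.
matches = [
    (u, v) for u, v, d in vekg.edges(data=True)
    if d.get("relation") == "near"
    and vekg.nodes[u]["label"] == "person"
    and vekg.nodes[v]["label"] == "bag"
]
print(matches)  # [('person_1@t0', 'bag_1@t0')]
```

In this reading, a VEKG-TAG-style optimization would collapse the per-frame nodes for the same tracked object (e.g. "person_1@t0" and "person_1@t1") into a single aggregated node over a time window, which is what yields the node and edge reductions reported above.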