Estimate the amount of data based on the number of events per second; the calculation assumes a typical event size. The more data you send to Splunk Enterprise, the more time Splunk needs to index it into results that you can search, report on, and generate alerts from.
Alternatively, estimate the average daily amount of data to be ingested directly.
Daily Data Volume = events/s × average event size (bytes) × 3600 seconds/hour × 24 hours/day
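The formula above can be sketched as a small helper; the event rate and average size in the example are illustrative values, not defaults from the application:

```python
def daily_volume_gb(events_per_sec: float, avg_event_bytes: float) -> float:
    """Daily data volume in GB: events/s * bytes/event * 3600 s/h * 24 h/day."""
    bytes_per_day = events_per_sec * avg_event_bytes * 3600 * 24
    return bytes_per_day / (1024 ** 3)

# Example: 1,000 events/s at 300 bytes/event is roughly 24 GB/day.
print(round(daily_volume_gb(1000, 300), 2))
```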
Specify the amount of time to retain data in each category. Data rolls from one category to the next depending on its age.
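A rough per-category storage estimate can be derived from the daily volume and the retention periods. The 0.5 compression factor below is an assumption (a commonly cited rule of thumb for indexed data on disk, not a guarantee), and the retention periods in the example are illustrative:

```python
def tier_storage_gb(daily_gb: float, retention_days: dict, compression: float = 0.5) -> dict:
    """Approximate on-disk storage per retention category.

    retention_days maps category name -> days the data spends in that
    category; compression is the assumed ratio of on-disk size to raw
    size (0.5 is a rule of thumb, not a measured value).
    """
    sizes = {cat: daily_gb * days * compression for cat, days in retention_days.items()}
    sizes["Total"] = sum(sizes.values())
    return sizes

# Example: 24 GB/day with 30 days hot/warm, 60 days cold, 275 days archived.
print(tier_storage_gb(24, {"Hot, Warm": 30, "Cold": 60, "Archived": 275}))
```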
Total = XX
Specify the number of nodes required. The more data to ingest, the greater the number of nodes required. Adding more nodes will improve indexing throughput and search performance.
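The node count scales with ingest volume. As a sketch only: the 100 GB/day-per-indexer figure below is an illustrative planning assumption, not a Splunk-published limit, and should be tuned for your hardware and search load:

```python
import math

def indexers_needed(daily_gb: float, gb_per_indexer_per_day: float = 100) -> int:
    # 100 GB/day per indexer is an illustrative planning figure;
    # real capacity depends on hardware and concurrent search load.
    return max(1, math.ceil(daily_gb / gb_per_indexer_per_day))

print(indexers_needed(250))  # 3
```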
This is a breakdown of the overall storage requirement.
Hot, Warm | | |
Cold | | |
Archived | | |
Total | | |
Specify the location of each storage volume. If possible, spread each type of data across separate volumes to improve performance: hot/warm data on the fastest disks, cold data on slower disks, and archived data on the slowest disks.
Buckets | Storage Type | |
---|---|---|
Specify the RAID level, the size of individual disks, and the contingency required for this volume. RAID configurations that stripe yield significantly better performance than parity-based RAID: RAID 0, 10, and 0+1 give the best performance, while RAID 5 offers the worst.
The selected storage configuration would typically be expected to achieve about IOPS for 100% read operations, and about IOPS for 100% write operations. These numbers assume that the array is dedicated to Splunk and consists of disk(s) (typically 200 IOPS per disk).
Number of Disks | ??? | ??? |
Physical Disk Space | ??? | ??? |
Effective Disk Space | ??? | ??? |
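The read/write IOPS estimate can be sketched with the common write-penalty model: reads benefit from every spindle, while writes are divided by the RAID write penalty (1 for RAID 0, 2 for RAID 10/0+1, 4 for RAID 5). Real arrays vary with caching and stripe size, so treat these as planning figures only:

```python
# Write-penalty factors for the common rule-of-thumb model.
WRITE_PENALTY = {"RAID0": 1, "RAID10": 2, "RAID01": 2, "RAID5": 4}

def array_iops(disks: int, raid: str, per_disk_iops: int = 200) -> tuple:
    """Return (read_iops, write_iops) for a dedicated array."""
    raw = disks * per_disk_iops
    return raw, raw // WRITE_PENALTY[raid]

read_iops, write_iops = array_iops(8, "RAID10")
print(read_iops, write_iops)  # 1600 800
```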
Specify the RAID level, the size of individual disks, and the contingency required for this volume. RAID configurations that stripe yield significantly better performance than parity-based RAID: RAID 0, 10, and 0+1 give the best performance, while RAID 5 offers the worst.
The selected storage configuration would typically be expected to achieve about IOPS for 100% read operations, and about IOPS for 100% write operations. These numbers assume that the array is dedicated to Splunk and consists of disk(s) (typically 200 IOPS per disk).
Number of Disks | ??? | ??? |
Physical Disk Space | ??? | ??? |
Effective Disk Space | ??? | ??? |
Specify the RAID level, the size of individual disks, and the contingency required for this volume. RAID configurations that stripe yield significantly better performance than parity-based RAID: RAID 0, 10, and 0+1 give the best performance, while RAID 5 offers the worst.
The selected storage configuration would typically be expected to achieve about IOPS for 100% read operations, and about IOPS for 100% write operations. These numbers assume that the array is dedicated to Splunk and consists of disk(s) (typically 200 IOPS per disk).
Number of Disks | ??? | ??? |
Physical Disk Space | ??? | ??? |
Effective Disk Space | ??? | ??? |
Specify the price per GB of storage. Hot/Warm data should be on the most expensive disk, cold data on cheaper disk and archived data on the cheapest disk.
Hot Warm | ? | ? | |
Cold | ? | ? | |
Archived | ? | ? | |
Total | ? |
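The cost calculation is the per-category size multiplied by its price per GB, summed for the total. The sizes and prices in the example are purely illustrative; substitute your own figures:

```python
def storage_cost(sizes_gb: dict, price_per_gb: dict) -> dict:
    """Cost per bucket category plus a grand total."""
    costs = {cat: sizes_gb[cat] * price_per_gb[cat] for cat in sizes_gb}
    costs["Total"] = sum(costs.values())
    return costs

# Illustrative sizes (GB) and prices per GB.
print(storage_cost({"Hot Warm": 360, "Cold": 720, "Archived": 3300},
                   {"Hot Warm": 1.50, "Cold": 0.50, "Archived": 0.10}))
```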
This is an example configuration file that describes the volume configuration for each data type. Note: The path should be modified to point to each disk type. It assumes that all data will be stored in the main index.
    # volume definitions
    [volume:]
    path = /mnt/
    maxVolumeDataSizeMB =

    [volume:]
    path = /mnt/
    maxVolumeDataSizeMB =

    [volume:]
    path = /mnt/
    maxVolumeDataSizeMB =

    [volume:]
    path = /mnt/

    # index definition (calculation is based on a single index)
    [main]
    homePath = volume:/defaultdb/db
    coldPath = volume:/defaultdb/colddb
    thawedPath = $SPLUNK_DB/defaultdb/thaweddb
    homePath.maxDataSizeMB =
    coldPath.maxDataSizeMB =
    maxWarmDBCount = 4294967295
    frozenTimePeriodInSecs =
    maxDataSize =
    coldToFrozenDir = /mnt//defaultdb/frozendb
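The volume stanzas of the template above can be generated from computed sizes. This is a sketch only: the volume names ("hot", "cold") and the /mnt paths below are hypothetical placeholders standing in for the blanks in the template, not values from the application:

```python
def render_volumes(volumes: dict) -> str:
    """Render [volume:...] stanzas for indexes.conf.

    volumes maps a hypothetical volume name to (path, max_size_mb);
    pass None for max_size_mb to omit maxVolumeDataSizeMB.
    """
    lines = ["# volume definitions"]
    for name, (path, max_mb) in volumes.items():
        lines.append(f"[volume:{name}]")
        lines.append(f"path = {path}")
        if max_mb is not None:
            lines.append(f"maxVolumeDataSizeMB = {max_mb}")
    return "\n".join(lines)

print(render_volumes({
    "hot": ("/mnt/fast", 368640),   # placeholder path and size
    "cold": ("/mnt/slow", None),    # no size cap on this volume
}))
```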
This is a list of potential features/enhancements that may be added in future. Please use the link on top of the page to send feedback and request additional features.
This sizing application is not supported by Splunk.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.