Friday, June 28, 2019

Loopback Addresses

Every system supports the entire loopback address range, from 127.0.0.1 to 127.255.255.255 (the 127.0.0.0/8 block).

We can confirm this by trying:
ping -c 1 127.0.0.1
ping -c 1 127.0.0.2

Here, -c sets the count, so only one echo request is sent.
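A small loop can probe several addresses in the block at once. This is a sketch; the -W 1 timeout is an addition so an unanswered ping does not stall the loop:

```shell
# Probe a few addresses in the 127.0.0.0/8 loopback block.
# -c 1 sends one echo request; -W 1 waits at most one second for a reply.
for last in 1 2 3; do
    addr="127.0.0.$last"
    if ping -c 1 -W 1 "$addr" > /dev/null 2>&1; then
        echo "$addr replied"
    else
        echo "$addr did not reply"
    fi
done
```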

***

Sunday, June 23, 2019

Default Ports for DataStax/Cassandra

22     SSH
3128   LCM targets
4040   Spark application web UI port
5598, 5599  Public/internode ports for DSEFS
7000   DSE internode cluster communication
7001   DSE SSL internode cluster communication
7077   Spark Master internode communication
7080   Spark Master console port
7081   Spark Worker web UI port
7199   DSE JMX metrics port
8090   Spark Jobserver REST API port
8182   Gremlin Server port for DSE Graph
8443   OpsCenter SSL port
8609   Internode messaging service port
8888   OpsCenter port
8983   Solr port
9042   Native transport (CQL client) port
9077   AlwaysOn SQL web UI port
9091   DataStax Studio server port
9142   DSE client port when SSL is enabled
9999   Spark Jobserver JMX port
18080  Spark application history server port
61619  OpsCenter server/STOMP port
61620  DataStax Agent STOMP server communications
61621  DataStax Agent HTTP/HTTPS service
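To check whether one of these ports is actually listening on a node, a quick probe can be done from the shell. This sketch uses bash's /dev/tcp redirection so no extra tools are needed; the host and port are just examples:

```shell
#!/bin/bash
# Report whether a TCP port is accepting connections.
check_port() {
    host=$1; port=$2
    if (exec 3<> "/dev/tcp/$host/$port") 2>/dev/null; then
        echo "$host:$port open"      # connection succeeded
    else
        echo "$host:$port closed"    # refused, filtered, or no listener
    fi
}

check_port 127.0.0.1 9042   # native transport - open only if DSE/Cassandra is running
```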

***

Wednesday, June 19, 2019

DSEFS Errors

Cause: DSEFS was enabled with the default keyspace shared across different DCs.

Steps in resolving the Issue:

1) From the dsefs shell, unmount the existing locations.
2) Disable DSEFS in dse.yaml and restart all nodes in the DC one at a time.
3) Drop the corresponding dsefs keyspace.
4) Update the dse.yaml configuration with the new parameters for the keyspace, directory paths, etc.
5) Restart the service on all nodes, one at a time.
6) Change the replication strategy and replication factor of the new dsefs keyspace.
7) Run nodetool repair on the keyspace.
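Steps 3 and 6 correspond to CQL statements along these lines. The keyspace name dsefs and the DC names/replication factors are illustrative; use the keyspace name configured in dse.yaml:

```sql
-- Step 3: drop the old DSEFS keyspace (name is illustrative).
DROP KEYSPACE IF EXISTS dsefs;

-- Step 6: after the restarts, switch the new keyspace to
-- NetworkTopologyStrategy with per-DC replication factors.
ALTER KEYSPACE dsefs
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};
```

Step 7 is then "nodetool repair dsefs" run on each node.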

***

Tuesday, June 18, 2019

Creating ext4 or xfs file system in EC2 for /dev/xvd*

Identify the disks by checking:
lsblk
ls /dev/xvd*

Use the command below to inspect a device and its file system. If the output shows simply "data", there is no file system on the device.

file -s /dev/xvdd
/dev/xvdd: data

If the device has a file system, the output shows its type, as in the example below.
file -s /dev/xvdd
/dev/xvdd: Linux rev 1.0 ext4 filesystem data, UUID=5b906ad5-f67a-474e-a7d4-7787586c72a3 (needs journal recovery) (extents) (64bit) (large files) (huge files)
--------------------------------------------------

If there is no file system, you can create one with:
mkfs.ext4 /dev/xvdd
mkdir -p /cassandra/data1
mount /dev/xvdd /cassandra/data1

or 

If the XFS tools are not installed, install them first with "yum install xfsprogs".
mkfs -t xfs /dev/xvdf
mkdir -p /cassandra/commitlog
mount /dev/xvdf /cassandra/commitlog
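The check-then-format decision above can be scripted. This is a sketch in which the helper only reports what it would do (the device name is illustrative); it relies on file -s printing just "data" when no file-system signature is present:

```shell
#!/bin/sh
# Detect whether a device (or image file) carries a file system yet.
needs_fs() {
    # `file -s` prints "<name>: data" when no file-system signature is found
    file -s "$1" | grep -q ': data$'
}

# Demonstration on a zero-filled image file standing in for /dev/xvdd:
dd if=/dev/zero of=/tmp/blank.img bs=1M count=1 2>/dev/null
if needs_fs /tmp/blank.img; then
    echo "no file system found - would run: mkfs.ext4 /dev/xvdd"
fi
```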
----------------------------------------------------
Make the changes persistent by adding the entries to /etc/fstab.

For example, edit /etc/fstab:
/dev/xvdd /cassandra/data1 ext4 defaults 0 0
/dev/xvdf /cassandra/commitlog xfs defaults 0 0

Then, at the command prompt, run:
mount -a
This mounts everything listed in /etc/fstab, so the new entries take effect without a reboot.

***

Sunday, June 16, 2019

Compaction Threshold Settings in Cassandra

When there are many SSTables, read operations are less efficient because the columns for a given key may be spread across several of them.

Adjusting the min and max compaction thresholds reduces how often minor compaction runs, but when it does run, more SSTables are compacted at once.

nodetool getcompactionthreshold <keyspace> <table>

This prints the current thresholds for the table; the defaults are min = 4 and max = 32.

Change the min and max thresholds to 5 and 30 as below:

nodetool setcompactionthreshold <keyspace> <table> 5 30

Note that nodetool changes last only until the node restarts; to make them permanent, alter the corresponding table's min_threshold and max_threshold accordingly, in this case to 5 and 30.

min_threshold sets the minimum number of similarly sized SSTables required before a minor compaction is triggered, and max_threshold caps the number of SSTables processed in a single minor compaction.
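The matching table-level change is an ALTER TABLE on the compaction subproperties. The keyspace/table names are illustrative, and SizeTieredCompactionStrategy is assumed here:

```sql
-- Persist the thresholds in the schema (nodetool changes are lost on restart).
ALTER TABLE my_ks.my_table
  WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                     'min_threshold': 5, 'max_threshold': 30};
```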

***

Monday, June 10, 2019

SAR: System Activity Reporter

SAR (System Activity Reporter) is available in the sysstat package.

SAR also keeps historical statistics (roughly the previous 10 days by default), so you can go back in time.
sadc is the backend data collector for SAR.

SAR comes with two helper tools, sa1 and sa2:

sa1: collects system data every 10 minutes (binary files)
sa2: writes a daily summary of the collected data (text file)

Both the sa1 and sa2 (sar) files are located under /var/log/sa.

Config file locations:
Ubuntu: /etc/default/sysstat - set ENABLED to "true"
CentOS: /etc/sysconfig/sysstat

SAR collection is basically driven by cron jobs (or systemd timers):
Ubuntu: /etc/cron.d/sysstat
Fedora (systemd timers): /etc/systemd/system/sysstat.service.wants/sysstat-collect.timer (for sa1) and sysstat-summary.timer (for sa2)
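Day-to-day use then looks like this. The sa file name is derived from the day of the month; the sar invocations themselves are shown as comments since they need sysstat installed:

```shell
#!/bin/sh
# sa1's binary files are named sa<day-of-month> under /var/log/sa.
safile="/var/log/sa/sa$(date +%d)"
echo "today's collector file: $safile"

# With sysstat installed:
#   sar -u 1 3            # live CPU usage, 3 samples 1 second apart
#   sar -u -f "$safile"   # CPU usage replayed from today's binary file
#   sar -r -f "$safile"   # memory usage from the same file
```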

***