After the root access issue in WebSphere MQ Explorer had been resolved, I was able to create a queue manager, but the command produced the following output:
****************************************
* Command: /opt/mqm/bin/crtmqm Q1
****************************************
WebSphere MQ queue manager created.
Directory '/var/mqm/qmgrs/Q1' created.
The queue manager is associated with installation 'Installation1'.
AMQ6024: Insufficient resources are available to complete a system request.
exitvalue = 36
At first I wasn't sure whether this was just a general welcome message after the queue manager had been created, since I verified the queue manager directory was there under /var/mqm/qmgrs. To confirm the message was not as friendly as it looked, I issued another command, sudo strmqm Q1, and got the same result:
$ sudo strmqm Q1
The system resource RLIMIT_NOFILE is set at an unusually low level for WebSphere MQ.
WebSphere MQ queue manager 'Q1' starting.
The queue manager is associated with installation 'Installation1'.
AMQ6024: Insufficient resources are available to complete a system request.
Clearly, this isn't a good thing. According to expert advice, there should be an error log located at /var/mqm/errors/. When I opened the file in that path, I saw something like the following:
+-----------------------------------------------------------------------------+
| |
| WebSphere MQ First Failure Symptom Report |
| ========================================= |
...
...
...
| Comment1 :- Failed to get memory segment: shmget(0x00000000, |
| 73834496) [rc=-1 errno=22] Invalid argument |
| Comment2 :- Invalid argument |
| Comment3 :- Configure kernel (for example, shmmax) to allow a |
| shared memory segment of at least 73834496 bytes |
| |
+-----------------------------------------------------------------------------+
This reminded me that there are additional kernel settings that need to be configured for WebSphere MQ on Linux systems. According to the guide, the following minimum configuration is required for WebSphere MQ:
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 268435456
kernel.sem = 500 256000 250 1024
fs.file-max = 524288
kernel.pid-max = 120000
kernel.threads-max = 48000
Among these settings, only shmmax and sem were not up to par on my system. Below are the current values:
$ cat /proc/sys/kernel/shmmax
33554432
$ cat /proc/sys/kernel/sem
250 32000 32 128
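The mismatch is plain from the numbers: the FFST report asked for a shared memory segment of 73834496 bytes (about 70 MB), while the current shmmax only allowed 33554432 bytes (32 MB). As a quick sketch of the check (the two values below are taken from the report and the /proc output above):

```shell
# Segment size MQ requested (from the FFST report) vs. the current shmmax limit
required=73834496
current=33554432
if [ "$required" -gt "$current" ]; then
  echo "shmmax too small: need at least $required bytes, limit is $current"
fi
```

Since the requested segment exceeds shmmax, the shmget call fails, which matches the "Configure kernel (for example, shmmax)" hint in Comment3.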
The following steps are what I did to fix this issue:
- Open the file /etc/sysctl.conf.
- Append the required configuration for shmmax and sem to the end of the file.
- Reload the configuration with the command sysctl -p.
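Concretely, the steps above look like this (run as root; the two values are the guide's minimums, so adjust them if your environment needs more):

```shell
# Append the required minimums for shmmax and sem to /etc/sysctl.conf
cat <<'EOF' >> /etc/sysctl.conf
kernel.shmmax = 268435456
kernel.sem = 500 256000 250 1024
EOF

# Reload the configuration so the new values take effect without a reboot
sysctl -p

# Verify the new limits are in place
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/sem
```

After reloading, strmqm Q1 should no longer fail with AMQ6024, since shmmax now allows the roughly 70 MB segment the queue manager asked for.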