Tuesday 10 December 2013

JDBC creation through wsadmin

Configuring new data sources using wsadmin

You can configure new data sources using the wsadmin scripting tool.

Before you begin

Before starting this task, the wsadmin tool must be running. See the topic Starting the wsadmin scripting client for more information. In WebSphere® Application Server, any JDBC driver properties that are required by your database vendor must be set as data source properties. Consult the article Data source minimum required settings, by vendor to see a list of these properties and setting options, ordered by JDBC provider type. Consult your database vendor documentation to learn about available optional data source properties, and script them as custom properties after you create the data source. In the Related links section of this article, click the "Configuring new data source custom properties using scripting" link for more information.
[z/OS] You can also learn about optional data source properties in the Application Programming Guide and Reference for Java for your version of DB2® for z/OS®, if you use one of the following JDBC providers:
  • DB2 for z/OS Local JDBC Provider (RRS), using the DB2 JDBC/SQLJ driver
  • DB2 Universal JDBC Driver provider

About this task

There are two ways to perform this task; use either of the following wsadmin scripting objects:
  • AdminTask object
  • AdminConfig object
AdminConfig gives you more configuration control than the AdminTask object. When you create a data source using AdminTask, you supply universally required properties only, such as a JNDI name for the data source. (Consult the article JDBCProviderManagement command group for the AdminTask object for more information.) Other properties that are required by your JDBC driver are assigned default values by Application Server. You cannot use AdminTask commands to set or edit these properties; you must use AdminConfig commands.

Procedure

  • Using the AdminConfig object to configure a new data source:
    1. Identify the parent ID, which is the name and location of the JDBC provider that supports your data source.
      • Using Jacl:
        set newjdbc [$AdminConfig getid /Cell:mycell/Node:mynode/JDBCProvider:JDBC1/]
      • Using Jython:
        newjdbc = AdminConfig.getid('/Cell:mycell/Node:mynode/JDBCProvider:JDBC1/')
        print newjdbc
      Example output:
      JDBC1(cells/mycell/nodes/mynode|resources.xml#JDBCProvider_1)
    2. Obtain the required attributes.
      Fastpath: For supported JDBC drivers, you can also script data sources according to the same pre-configured templates that are used by the administrative console logic. Consult the article Creating configuration objects using the wsadmin scripting tool for details.
      • Using Jacl:
        $AdminConfig required DataSource
      • Using Jython:
        print AdminConfig.required('DataSource')
      Example output:
      Attribute   Type
      name    String
      Tip: If the database vendor-required properties (which are referenced in the article Data source minimum required settings, by vendor) are not displayed in the resulting list of required attributes, script these properties as data source custom properties after you create the data source; a combined Jython sketch follows this procedure.
    3. Set up the required attributes.
      • Using Jacl:
        set name [list name DS1]
        set dsAttrs [list $name]
      • Using Jython:
        name = ['name', 'DS1']
        dsAttrs = [name]
    4. Create the data source.
      • Using Jacl:
        set newds [$AdminConfig create DataSource $newjdbc $dsAttrs]
      • Using Jython:
        newds = AdminConfig.create('DataSource', newjdbc, dsAttrs)
        print newds
      Example output:
      DS1(cells/mycell/nodes/mynode|resources.xml#DataSource_1)
  • Using the AdminTask object to configure a new data source:
    • Using Jacl:
      $AdminTask createDatasource {-interactive}
    • Using Jython:
      AdminTask.createDatasource(['-interactive'])
  • Save the configuration changes. See the topic Saving configuration changes with the wsadmin tool for more information.
  • In a network deployment environment only, synchronize the node. See the topic Synchronizing nodes with the wsadmin tool for more information.
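
A rough end-to-end Jython sketch combining the steps above for a hypothetical data source DS1 under the provider JDBC1 (the cell, node, JNDI name, property name, and URL value are invented examples; your vendor's required properties will differ, and the custom-property step is only one common way to script them):

  # Locate the JDBC provider that will own the new data source
  newjdbc = AdminConfig.getid('/Cell:mycell/Node:mynode/JDBCProvider:JDBC1/')

  # Create the data source with its universally required attributes
  dsAttrs = [['name', 'DS1'], ['jndiName', 'jdbc/DS1']]
  newds = AdminConfig.create('DataSource', newjdbc, dsAttrs)

  # Script a vendor-required setting as a custom property (illustrative name and value)
  propSet = AdminConfig.create('J2EEResourcePropertySet', newds, [])
  AdminConfig.create('J2EEResourceProperty', propSet, [['name', 'URL'], ['value', 'jdbc:oracle:thin:@dbhost:1521:SAMPLE']])

  # Save the configuration change (and synchronize nodes in a network deployment cell)
  AdminConfig.save()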



Sunday 1 December 2013

WebSphere MQ Notes

1. What is MQ and what does it do?
Ans. MQ stands for Message Queuing. WebSphere MQ allows application programs to use message queuing to participate in message-driven processing. Application programs can communicate across different platforms by using the appropriate message queuing software products.
2. What is Message driven process?
Ans . When messages arrive on a queue, they can automatically start an application using triggering. If necessary, the applications can be stopped when the message (or messages) have been processed.
3. What are advantages of the MQ?
Ans. 1. Integration.
2. Asynchrony
3. Assured Delivery
4. Scalability.
4. How does it support the Integration?
Ans. MQ is independent of the operating system you use (Windows, Solaris, AIX, and so on) and independent of the protocol (TCP/IP, LU 6.2, SNA, NetBIOS, UDP). It is not required that both the sender and receiver run on the same platform.
5. What is Asynchrony?
Ans. With message queuing, the exchange of messages between the sending and receiving programs is independent of time. This means that the sending and receiving application programs are decoupled; the sender can continue processing without having to wait for the receiver to acknowledge receipt of the message. The target application does not even have to be running when the message is sent. It can retrieve the message after it has been started.
6. What are the hardware and Software requirements for MQ Installation in AIX?
Ans. WebSphere MQ for AIX, V5.3 runs on any machine that supports the AIX V4.3.3 PowerPC® 32-bit, or AIX® V5.1 Power 32-bit, operating system.
Disk storage: typical storage requirements are as follows:
1. Server installation: 50 MB
2. Client installation: 15 MB
3. Data storage (server): 50 MB
4. Data storage (client): 5 MB
Software Requirements:
Operating system: The operating systems supported by WebSphere MQ for AIX, V5.3 are:
1. AIX V4.3.3, with PTF U472177, running in a 32-bit environment on 32-bit or 64-bit hardware.
2. AIX V5.1, with PTFs U476879, U477366, U477367 and U477368, and APAR fix IY29345, running the 32-bit kernel on 32-bit or 64-bit hardware.
3. AIX V5.1, with PTFs U476879, U477366, U477367 and U477368, and APAR fix IY29345, running the 64-bit kernel on 64-bit hardware.
Connectivity: the network protocols supported by WebSphere MQ for AIX, V5.3 are:
1. TCP/IP
2. SNA LU 6.2.
Databases: DB2 7.1, 7.2
Oracle 8i and 9i
Sybase v12 or v 12.5
Java: If you want to use the Java Messaging Support, you need the Java Runtime Environment Version 1.3 or later
What are the software and hardware requirements for installing MQ on Windows?
Ans: MQ V5.3 supports Windows NT, Windows 2000, Windows XP, and Windows Server 2003 (Standard and Enterprise Editions).
Disk storage: typical storage requirements are as follows:
1. Server installation: 50 MB
2. Client installation: 15 MB
3. Data storage (server): 50 MB
4. Data storage (client): 5 MB
Connectivity: the network protocols supported by WebSphere MQ for Windows, V5.3 are:
1. TCP/IP
2. SNA LU 6.2
3. NetBIOS
Databases: DB2 7.1, 7.2
Oracle 8i and 9i
Sybase v12 or v 12.5
Java: If you want to use the Java Messaging Support, you need the Java Runtime Environment Version 1.3 or later
7. What is a Message and what does it contain?
Ans: A message is a string of bytes that is meaningful to the applications that use it. Messages are used to transfer information from one application program to another (or between different parts of the same application). The applications can be running on the same platform, or on different platforms.
WebSphere MQ messages have two parts:
1. The application data. The content and structure of the application data is defined by the application programs that use it.
2. A message descriptor. The message descriptor identifies the message and contains additional control information, such as the type of message and the priority assigned to the message by the sending application. WebSphere MQ defines the format of the message descriptor (MQMD); for a complete description, see the WebSphere MQ Application Programming Reference.
8. What is the maximum message length that MQ supports?
Ans: The default maximum message length is 4 MB, although you can increase this to a maximum of 100 MB (where 1 MB equals 1 048 576 bytes).
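For example, raising the limit to 100 MB on a queue manager and on one of its queues could look like the following MQSC sketch (QM1 and APP.QUEUE are invented names; note that the channels carrying the messages also have a MAXMSGL attribute that must be at least as large):

  * raise the queue manager limit to 100 MB (value is in bytes)
  ALTER QMGR MAXMSGL(104857600)
  * raise the limit on an individual queue as well, then verify both
  ALTER QLOCAL(APP.QUEUE) MAXMSGL(104857600)
  DISPLAY QMGR MAXMSGL
  DISPLAY QLOCAL(APP.QUEUE) MAXMSGL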
9. What is the difference between Persistent and Non Persistent Messages?
Ans: In WebSphere MQ, messages can be either persistent or nonpersistent. Persistent messages are logged and can be recovered in the event of a WebSphere MQ failure; thus, persistent messages are guaranteed to be delivered once and only once. Nonpersistent messages are not logged. WebSphere MQ still guarantees to deliver them not more than once, but it does not promise to deliver them once.
10. What is the effect of using persistent messages?
Ans: Persistent messages are usually logged. Logging messages reduces the performance of your application, so use persistent messages for essential data only. If the data in a message can be discarded if the queue manager stops or fails, use a nonpersistent message.
WebSphere MQ messages:
Messages are made up of Two parts: Message descriptor, Application data
Types of messages?
Datagram: A Message sent with no response expected.
Request: A Message sent for which a response is expected.
Reply: A Response Message for a requested message.
Report: A Message that describes the occurrence or event
Ex: COA (confirm on arrival) / COD (confirm on delivery)
Sizes?
Queue manager → 10000 msgs, MaxMsgLength → 4 MB
Queue → 5000 msgs, MaxMsgLength → 4 MB
11. What is the attribute used to see the Message length?
Ans: MaxMsgLength
12. What is MQ Client?
Ans: A Web Sphere MQ client is a component that allows an application running on a system to issue MQI calls to a queue manager running on another system. The output from the call is sent back to the client, which passes it back to the application.
13. What is MQ Server?
Ans: A Web Sphere MQ server is a queue manager that provides queuing services to one or more clients. All the Web Sphere MQ objects, for example queues, exist only on the queue manager machine (the Web Sphere MQ server machine), and not on the client. A Web Sphere MQ server can also support local Web Sphere MQ
Applications
14. What are the Objects used in Web sphere MQ?
Ans:
1. Queue Manager
2. Queues
3. Channels
4. Processes
5. Name lists.
15. What is the maximum number of characters allowed for the names of MQ objects?
Ans: For MQ channels it is 20 characters.
For the remaining objects it is 48 characters.
16. What is the default port number for an MQ queue manager?
Ans: 1414
17. Difference between MQSC commands and Control commands?
MQSC Commands – These commands are used to handle the admin related functions for the components that are present in the MQ Series. In general MQSC commands are used for creating and maintaining Message channels, Queue Managers, Clusters etc…
Control Commands – These commands are used to manage the processes and services that are helpful in the functioning of the MQ Series. In general these commands are used for Channel listener, Channel Initiator, Trigger monitor etc…
18. Are MQSC attributes case sensitive?
Ans: MQSC commands, including their attributes, can be written in uppercase or lowercase. Object names in MQSC commands are folded to uppercase (that is, QUEUE and queue are not differentiated) unless the names are enclosed within single quotation marks. If quotation marks are not used, the object is processed with a name in uppercase.
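A quick illustration (queue names are invented): the first two commands below refer to the same object, while the quoted name keeps its mixed case and so names a different queue:

  * unquoted names are folded to uppercase, so this defines ORDERS.IN
  DEFINE QLOCAL(ORDERS.IN)
  * this also resolves to ORDERS.IN and finds the queue defined above
  DISPLAY QLOCAL(orders.in)
  * the quoted name is kept as-is, so this creates a second, different queue
  DEFINE QLOCAL('Orders.In')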
SCRIPT COMMANDS:-
After entering the queue manager (runmqsc), we can use the script commands.
Script commands are the same for every queue manager.
(These commands are conventionally written in CAPITAL LETTERS.)
· DEFINE: to define/create MQ objects such as queues, channels, processes, and listeners
· ALTER: to update or modify existing objects
· DISPLAY: to view all the properties of a particular object, or to display all objects
· DELETE: to delete created objects
· CLEAR: to clear the messages from a queue
· END: to come out of the queue manager (exit runmqsc)
· PING: to check whether the other side's channel / queue manager is ready to accept our request
· START: to start a particular channel or listener
· STOP: to stop a particular channel or listener
· REFRESH: to refresh security after authorization changes (REFRESH SECURITY), or to refresh cluster information (REFRESH CLUSTER)
· RESET: to reset a channel, cluster, or queue manager
· RESOLVE: to resolve a channel that is in an in-doubt state
· SUSPEND: to suspend a queue manager from a cluster environment
· RESUME: to bring a suspended queue manager back into the cluster
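A short runmqsc session illustrating a few of these verbs (QM1 and the object names are invented):

  runmqsc QM1
  * define a local queue and display its attributes
  DEFINE QLOCAL(TEST.Q) MAXDEPTH(5000)
  DISPLAY QLOCAL(TEST.Q) ALL
  * change an attribute, clear the queue, then delete it
  ALTER QLOCAL(TEST.Q) PUT(DISABLED)
  CLEAR QLOCAL(TEST.Q)
  DELETE QLOCAL(TEST.Q)
  END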
19. How can we write MQSC commands that have too many parameters?
Ans: For commands that have too many parameters to fit on one line, use continuation characters to indicate that a command is continued on the following line:
1. A minus sign (-) indicates that the command is to be continued from the start of the following line.
2. A plus sign (+) indicates that the command is to be continued from the first nonblank character on the following line.
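For instance, a long DEFINE command can be split either way (object names and values are illustrative):

  DEFINE QLOCAL(VERY.LONG.QUEUE.NAME) +
         DESCR('continued from the first nonblank character') +
         MAXDEPTH(10000)
  DEFINE QLOCAL(ANOTHER.QUEUE) -
  DESCR('continued from the start of the following line')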
20. What is programmable command format (PCF) commands?
These commands are issued from a programme for local or remote administration done by programmers.
21. What are commands used for creating the Queue manager from the Command prompt?
Ans: crtmqm -q -d MY.DEFAULT.XMIT.QUEUE -u DEAD.LETTER.QUEUE QM1
Here -q used to define the Queue manager QM1 as a Default Queue manager
-d is used to define the default transmission Queue -u is used to define the default dead letter queue.
22. How can you make an existing queue manager the default queue manager?
Ans: On Windows systems, use the Web Sphere MQ Services snap-in to display the properties of the queue manager, and check the Make queue manager the default box. You need to stop and restart the queue manager for the change to take effect.
23. Where will be the backup files present after creating the Queue Manager?
Ans: Windows systems: If you use Web Sphere MQ for Windows NT and Windows 2000, configuration information is stored in the Windows Registry.
UNIX Systems: 1. When you install the product, the Web Sphere MQ configuration file (mqs.ini) is created. It contains a list of queue managers that is updated each time you create or delete a queue manager. There is one mqs.ini file per node.
2. When you create a new queue manager, a new queue manager configuration file (qm.ini) is automatically created. This contains configuration parameters for the queue manager.
24. What is the command used for starting the Queue Manager?
Ans: strmqm QMName
25. What is the command used for stopping the Queue manager?
Ans: endmqm -w QMName
The command waits until all applications have stopped and the queue manager has ended.
endmqm –i QMName
This type of shutdown does not wait for applications to disconnect from the queue manager.
26. What’s the message code for Stopping a Queue Manager?
AMQ4044 Queue manager stopping
27. What is the command used to delete the QueueManager?
Ans: dltmqm QMName
28. Display the attributes of the Queue Manager QM1?
Ans: runmqsc QM1
DISPLAY QMGR
What is Queue?
Ans: A queue is a data structure used to store messages. A queue manager owns each queue. The queue manager is responsible for maintaining the queues it owns, and for storing all the messages it receives onto the appropriate queues
29. What is the Default max Queue depth?
Ans 5000
30. What the different Types of Queues?
Local Queue Remote Queues Alias Queues
Model Queue Dynamic Queues Cluster Queues.
Queue: A safe place to store messages for Prior-To-Delivery, it belongs to the Qmgr to which the application is connected.
Model Queue: Model queue is a template of a queue definition that uses when creating a dynamic queue.
Alias Queue: Queue definition, which is Alias to an actual Local or Remote Q. Used for security and easy maintenance.
Remote Queue: Object that defines a Queue belongs to another Q Manager (Logical Def).
Initiation Queue: An initiation queue is a local queue to which the queue manager writes a trigger message when certain conditions are met on another local queue
Dynamic Queue: Such a queue is defined “on the fly” when the application needs it. Dynamic queues may be retained by the queue manager or automatically deleted when the application program ends. Use- To store intermediate results.
Cluster Queue: Custer queue is a local queue that is known throughout a cluster of queue managers.
Reply-To-Queue: A request message must contain the name of the queue into which the responding program must put the Reply Message.
Queue Manager: Provides Messaging services and manages the Queues, Channels, and Processes that belongs to it.
Alias Q Manager: Queue-manager aliases, are created using a remote-queue definition with a blank RNAME.
31. What are the attributes required for the Remote Queue Definition?
Ans: 1. Name of the queue (the local definition name)
2. Transmission queue name (XMITQ)
3. Remote queue manager name (RQMNAME)
4. Remote local queue name (RNAME)
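For example, a remote queue definition on the local queue manager that points at a local queue owned by QM2 might look like this (all names are invented):

  * on the sending (local) queue manager
  DEFINE QREMOTE(ORDERS.REMOTE) +
         RNAME(ORDERS.LOCAL) +
         RQMNAME(QM2) +
         XMITQ(QM2)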
32. How can you define queues in MQ?
Ans: Queues are defined to Web Sphere MQ using:
1. The MQSC command DEFINE
2. The PCF Create Queue command
33. What is Transmission Queue?
Ans: Transmission queues are queues that temporarily store messages that are destined for a remote queue manager. You must define at least one transmission queue for each remote queue manager to which the local queue manager is to send messages directly.
34. What is Initiation Queues?
Ans: Initiation queues are queues that are used in triggering. A queue manager puts a trigger message on an initiation queue when a trigger event occurs. A trigger event is a logical combination of conditions that is detected by a queue manager.
35. What is Dead Letter Queue?
Ans: A dead-letter (undelivered-message) queue is a queue that stores messages that cannot be routed to their correct destinations. This occurs when, for example, the destination queue is full. The supplied dead-letter queue is called SYSTEM.DEAD.LETTER.QUEUE. For distributed queuing, define a dead-letter queue on each queue manager involved.
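Defining and nominating a dead-letter queue is typically done with MQSC like this (the queue name shown is the conventional one; any local queue can be used, and the MAXDEPTH value is just an example):

  DEFINE QLOCAL(SYSTEM.DEAD.LETTER.QUEUE) +
         DESCR('Dead letter queue') MAXDEPTH(100000)
  ALTER QMGR DEADQ(SYSTEM.DEAD.LETTER.QUEUE)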
36. What is the maximum size that queues support in MQ V5.3?
Ans: A queue can hold up to around 2 GB of data.
37. How can you create a transmission queue from a local queue?
Ans: Change the queue's USAGE attribute from NORMAL to XMITQ (transmission).
38. Define a Local Queue LQ using the MQSC Commands in the QM QM1
Ans: runmqsc QM1
DEFINE QLOCAL(LQ)
39. What are the Difference B/W Predefined & Dynamic Queues?
Ans: Queues can be characterized by the way they are created:
1. Predefined queues are created by an administrator using the appropriate MQSC or PCF commands. Predefined queues are permanent; they exist independently of the applications that use them and survive Web Sphere MQ restarts.
2 Dynamic queues are created when an application issues an MQOPEN request specifying the name of a model queue. The queue created is based on a template queue definition, which is called a model queue.
40. What is the Algorithm followed in retrieving the Messages from the Queue?
Ans: 1.First-in-first-out (FIFO).
2.Message priority, as defined in the message descriptor. Messages that have the same priority are retrieved on a FIFO basis.
3. A program request for a specific message.
41. What is Process Definition and what are the attributes does it contain?
Ans: A process definition object defines an application that starts in response to a trigger event on a WebSphere MQ queue manager. The process definition attributes include the application ID, the application type, and data specific to the application.
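A minimal triggering sketch tying a process definition to a triggered queue, assuming an application queue APP.QUEUE, an initiation queue APP.INITQ and a program /opt/app/process_orders (all names invented):

  DEFINE QLOCAL(APP.INITQ)
  DEFINE PROCESS(APP.PROC) APPLTYPE(UNIX) APPLICID('/opt/app/process_orders')
  DEFINE QLOCAL(APP.QUEUE) TRIGGER TRIGTYPE(FIRST) +
         INITQ(APP.INITQ) PROCESS(APP.PROC)
  * the trigger monitor reads APP.INITQ and starts the application;
  * it is started from the command line with: runmqtrm -m QM1 -q APP.INITQ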
42. What is intercommunication, and which components are used to send a message?
Ans: In WebSphere MQ, intercommunication means sending messages from one queue manager to another. The receiving queue manager could be on the same machine or another, nearby or on the other side of the world. It could be running on the same platform as the local queue manager, or could be on any of the platforms supported by WebSphere MQ. This is called a distributed environment. The components used are:
· Message channels
· Message channel agents
· Transmission queues
· Channel initiators and listeners
· Channel-exit programs
43. What is Distributed Queue Management (DQM)?
WebSphere MQ handles communication in a distributed environment such as this using DQM. The local queue manager is sometimes called the source queue manager, and the remote queue manager is sometimes called the target queue manager or the partner queue manager.
44. What are the objects required for DQM?
Ans: On the source queue manager:
1. Transmission queue
2. Remote queue definition
3. Dead letter queue (recommended)
4. Sender channel
On the target queue manager:
1. Local queue
2. Dead letter queue
3. Receiver channel
4. Listener
Note: The sender and receiver channel names must be the same (a minimal setup is sketched below).
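A minimal MQSC sketch of these objects for one-way traffic from QM1 to QM2 (queue, channel, host and port names are invented; the sender and receiver channel names match):

  * on QM1 (source), in runmqsc QM1:
  DEFINE QLOCAL(QM2) USAGE(XMITQ)
  DEFINE QREMOTE(ORDERS.REMOTE) RNAME(ORDERS.LOCAL) RQMNAME(QM2) XMITQ(QM2)
  DEFINE CHANNEL(QM1.TO.QM2) CHLTYPE(SDR) TRPTYPE(TCP) +
         CONNAME('qm2host(1415)') XMITQ(QM2)
  * on QM2 (target), in runmqsc QM2:
  DEFINE QLOCAL(ORDERS.LOCAL)
  DEFINE CHANNEL(QM1.TO.QM2) CHLTYPE(RCVR) TRPTYPE(TCP)
  * then start a listener on QM2 (runmqlsr -t tcp -m QM2 -p 1415)
  * and start the sender channel on QM1:
  START CHANNEL(QM1.TO.QM2)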
45. What is channel and mention different types of channels in MQ?
Ans: Channels are objects that provide a communication path from one queue manager to another. Channels are used in distributed queuing to move messages from one queue manager to another. They shield applications from the underlying communications protocols. The queue managers might exist on the same, or different, platforms. Different types of Channels:
1. Sender-Receiver Channels
2. Requester-Server Channels
3. Client Connection channels
4. Server Connection Channels.
5. Cluster Sender.
6. Cluster Receiver Channels
46. What are MQI channels and their types?
MQI channels are the channels that carry messages from an MQ client application to the MQ server and vice versa. They are bi-directional channels:
1. Server-connection
2. Client-connection
47. How many Channel Combinations?
1. Sender-receiver Channel
2. Requester-sender Channel
3. Cluster-Sender- Receiver Channel
4. Requester-server Channel
5. Server-receiver Channel
6. Client-Server Channel
48. What are the attributes required for the Sender Channel?
Ans: 1. The Name of the Channel 4.Transport Type
2. The Connection name 5.Scyexit
49. What are the different channel statuses?
Ans: Channel status:
1. Inactive
2. Running
3. Retrying
4. Stopped
50. What about the Initializing and Binding states?
Ans: Before reaching the running state, the channel first initializes (contacting the listener) and binds with the receiver channel; only then does it go into running mode.
51. Tell me Some Channel Attributes?
Batch heartbeat interval (BATCHHB): this heartbeat allows a sending channel to verify that the receiving channel is still active just before committing a batch of messages; if the receiving channel is not active, the batch can be backed out rather than becoming in-doubt. Other attributes include: batch interval (BATCHINT), batch size (BATCHSZ), channel type (CHLTYPE), cluster (CLUSTER), cluster namelist (CLUSNL), connection name (CONNAME), convert message (CONVERT), disconnect interval (DISCINT), heartbeat interval (HBINT), keepalive interval (KAINT), long retry count (LONGRTY), long retry interval (LONGTMR), and maximum message length (MAXMSGL).
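To give a feel for how these appear on a definition, here is a sender channel with a few of the attributes set (all names and values are arbitrary examples):

  DEFINE CHANNEL(QM1.TO.QM2) CHLTYPE(SDR) TRPTYPE(TCP) +
         CONNAME('qm2host(1414)') XMITQ(QM2) +
         BATCHSZ(50) BATCHHB(5000) HBINT(300) +
         DISCINT(6000) LONGRTY(999999999) LONGTMR(1200) MAXMSGL(4194304)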
52. Why is the RETRYINT attribute used?
If a message is placed on the DLQ because of a put-inhibited or queue-full condition, the DLQ handler attempts to put the message back to the destination queue. The interval between attempts is called RETRYINT; by default the retry interval is 60 seconds.
53. What is channel disconnect interval?
This is a time-out attribute, specified in seconds, for the server, cluster-sender, and cluster-receiver channels. The interval is measured from the point at which a batch ends, that is when the batch size is reached or when the batch interval expires and the transmission queue becomes empty. If no messages arrive on the transmission queue during the specified time interval, the channel closes down
54. Explain the channel attribute BATCHSIZE?
BATCHSIZE denotes the maximum number of messages that can be sent through a channel before taking a checkpoint. This parameter is valid only for channels with a channel type (CHLTYPE) of SDR, SVR, RCVR, RQSTR, CLUSSDR, or CLUSRCVR. The value must be in the range 1 through 9999.
55. What is BATCH HEARTBEAT INTERVAL?
Ans: The batch heartbeat interval allows a sending channel to verify that the receiving channel is still active just before committing a batch of messages, so that if the receiving channel is not active, the batch can be backed out rather than becoming in-doubt, as would otherwise be the case. By backing out the batch, the
messages remain available for processing so they could, for example, be redirected to another channel.
56. What is Keep Alive Interval?
Ans: The Keep Alive Interval parameter is used to specify a time-out value for a channel. The Keep Alive Interval parameter is a value passed to the communications stack specifying the Keep Alive timing for the channel. It allows you to specify a different keep alive value for each channel. The value indicates a time, in seconds, and must be in the range 0 to 99999.
57. What is the long retry count?
Ans: This specifies the maximum number of times that the channel is to try allocating a session to its partner. If the initial allocation attempt fails, the short retry count is decremented and the channel retries the remaining number of times; once the short retry count is exhausted, the long retry count and interval are used.
58. What are the ways to start a channel?
Use the MQSC command START CHANNEL
Use the control command runmqchl to start the channel as a process
Use the channel initiator to trigger the channel
Types of channel states:
Inactive, and Current (Stopped, Starting, Retrying, and Active)
59. What are the three options for stopping channels?
QUIESCE, FORCE, and TERMINATE.
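For example (the channel name is invented):

  START CHANNEL(QM1.TO.QM2)
  * or, as a control command, run the channel as a process: runmqchl -c QM1.TO.QM2 -m QM1
  STOP CHANNEL(QM1.TO.QM2) MODE(QUIESCE)
  STOP CHANNEL(QM1.TO.QM2) MODE(FORCE)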
60. What are the components of a message channel?
A queue manager uses a message channel to communicate with another queue manager. The components of a message channel are:
1. Sender message channel agent (MCA): a program that transfers messages from a transmission queue to a communication link.
2. Receiver MCA: transfers messages from the communication link into the target queue.
3. Communication protocol: responsible for transferring messages.
A message channel is unidirectional.
61. What is Message Channel agent (MCA)?
Ans: A message channel agent (MCA) is a program that controls the sending and receiving of messages. There is one message channel agent at each end of a channel. One MCA takes messages from the transmission queue and puts them on the communication link. The other MCA receives messages and delivers them onto a queue on the remote queue manager.
A message channel agent is called a caller MCA if it initiated the communication; otherwise it is called a responder MCA.
62. What is Channel initiator and Listeners?
Ans: A channel initiator acts as a trigger monitor for sender channels, because a transmission queue may be defined as a triggered queue. When a message arrives on a transmission queue that satisfies the triggering criteria for that queue, a message is sent to the initiation queue, triggering the channel initiator to start the
appropriate sender channel. You can also start server channels in this way if you specified the connection name of the partner in the channel definition. This means that channels can be started automatically, based upon messages arriving on the appropriate transmission queue.
You need a listener program to start receiving (responder) MCAs. Responder MCAs are started in response to a startup request from the caller MCA; the channel listener detects incoming network requests and starts the associated channel.
63. What errors cause a channel to go into retrying state?
Due to: 1. The transmission queue is get-disabled
2. Network issues
3. The remote queue manager is stopped
4. The listener is not running
5. Triggering is turned off
64. Explain Channel-Exit programs and what are the types?
Channel-exit programs are called at defined places in the processing carried out by MCA programs
Security Exit: You can use security exit programs to verify that the partner at the other end of a channel is genuine
Message Exit: Message Exit can be used for Encryption on the link, message data conversion, validation of user ID,
Message-retry Exit: Message-retry exit is called when an attempt to open the target queue is unsuccessful
Sender and receiver Exit: You can use the send and receive exits to perform tasks such as data compression and decompression
Channel auto-definition exit, Transport-retry exit
65. What are the different logging methods available?
Ans: There are two different types available:
1. Circular: circular logging is used for restart recovery and is the default logging method; it is typically used for development and test queue managers. Circular logging keeps all restart data in a ring of log files. Logging fills the first file in the ring, then moves on to the next, and so on, until all the files are full. It then goes back to the first file in the ring and starts again. This continues as long as the product is in use, and has the advantage that you never run out of log files.
2. Linear: linear logging gives you both restart recovery and media recovery, and is typically used in production. Linear logging keeps the log data in a continuous sequence of files. Space is not reused, so you can always retrieve any record logged from the time that the queue manager was created. As disk space is finite, you might have to think about some form of archiving; it is an administrative task to manage your disk space for the log, reusing or extending the existing space as necessary.
66. What is the Default location where the logs are stored and mention the default sizes?
Ans: Default location:
Windows: C:\Program Files\IBM\WebSphere MQ\log\<QMgrName>
UNIX: /var/mqm/log
67. What is the log file size?
Ans: In Web Sphere MQ for Windows NT and Win 2000, the minimum value is 32, and the maximum is 16 384. The default value is 256, giving a default log size of 1 MB.
In Web Sphere MQ for UNIX systems, the minimum value is 64, and the maximum is 16 384. The default value is 1024, giving a default log size of 4 MB.
68. How will you change the log file size?
Ans: You cannot change the log file size of an existing queue manager; for that you need to drop and re-create the queue manager. The number of primary and secondary log files can be changed, but you need to restart the queue manager for the changes to take effect.
69. What are the numbers of primary and secondary log files allocated?
Ans: Primary log files: 3 by default; the minimum is 2 and the maximum is 253 on Windows / 510 on UNIX.
Secondary log files: 2 by default; the minimum is 1 and the maximum is 252 on Windows / 509 on UNIX.
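These settings live in the Log stanza of qm.ini; a typical UNIX example (values shown correspond to the defaults discussed above, with QM1 as an invented queue manager name) looks roughly like:

  Log:
     LogPrimaryFiles=3
     LogSecondaryFiles=2
     LogFilePages=1024
     LogType=CIRCULAR
     LogBufferPages=0
     LogPath=/var/mqm/log/QM1/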
70. What is the command used for creating the listener?
Ans: In WebSphere MQ V5.3 a listener is not defined as a separate object; it is started directly with the runmqlsr control command (see the next question). From V6.0 onwards a listener object can be defined with the MQSC command DEFINE LISTENER.
71. What is the commands used for running listener in 5.3 Version?
Ans: runmqlsr -t tcp -m QMNAME -p portno
72. What is command used to perform task on the MQ services?
Ans: amqmdain
73. What are commands used on the Command server?
Ans: 1.strmqcsv: to start the command server
2. dspmqcsv: to display the command server
3. endmqcsv: To end the command server.
74. Is there any chance of message loss?
Ans: If the target queue manager does not have a dead letter queue defined, and nonpersistent messages are flowing over a fast (NPMSPEED FAST) channel, then there is a chance of message loss.
75. What is the command that is used to provide authorization for the clients?
Ans: setmqaut -m QMName -t queue -n Queuename -p GUEST +all
What are the common errors you get in DQM, and how do you resolve them?
Ans: MQRC 2058: MQRC_Q_MGR_NAME_ERROR
MQRC 2059: MQRC_Q_MGR_NOT_AVAILABLE
MQRC 2033: MQRC_NO_MSG_AVAILABLE
MQRC 2085: MQRC_UNKNOWN_OBJECT_NAME
MQRC 2009: MQRC_CONNECTION_BROKEN
MQRC 2043: MQRC_OBJECT_TYPE_ERROR
MQRC 2086: MQRC_UNKNOWN_OBJECT_Q_MGR
MQRC 2035: MQRC_NOT_AUTHORIZED
76. What are the different modes in which an application can connect to a queue manager?
Ans: 1. Binding mode: in binding mode, also known as server connection, the communication to the queue manager uses inter-process communication. One key point is that binding mode is available only to programs running on the MQSeries server that hosts the queue manager; a program using binding mode will not run from an MQSeries client machine. Binding mode is a fast and efficient way to interact with MQSeries. Certain facilities, such as XA transaction coordination by the queue manager, are available only in binding mode.
2. Client connection: a client connection uses a TCP/IP connection to the MQSeries server and enables communication with the queue manager. Programs using client connections can run on an MQSeries client machine as well as on an MQSeries server machine. Client connections use client channels on the queue manager to communicate with the queue manager. The client connection does not support XA transaction coordination by the queue manager.
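For a quick client-connection test you can point an MQ client at a server-connection channel with the MQSERVER environment variable and use the client sample programs (channel, host, port and queue names below are invented):

  # on the client machine
  export MQSERVER='SYSTEM.DEF.SVRCONN/TCP/qm1host(1414)'
  # put and then get a test message using the client samples
  amqsputc TEST.Q QM1
  amqsgetc TEST.Q QM1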
77. What are the different types of messaging systems used by JMS?
Ans: JMS applications use either the point-to-point (PTP) or publish/subscribe style of messaging.
Point-to-Point: Point-to-point messaging involves working with queues of messages. The sender sends messages to a specific queue to be consumed normally by a single receiver. In point-to-point communication, a message has at most one recipient. A sending client addresses the message to the queue that holds the messages for the intended (receiving) client.
Publish/Subscribe: In contrast to the point-to-point model of communication, the publish/subscribe model enables the delivery of a message to multiple recipients. A sending client addresses, or publishes, the message to a topic to which multiple clients can be subscribed. There can be multiple publishers, as well as subscribers, to a topic.
78. Is It Possible to use one transmission Queue for the multiple message channels?
Ans: It is possible to define more than one channel per transmission queue, but only one of these channels can be active at any one time. This is recommended for the provision of alternative routes between queue managers for traffic balancing and link failure corrective action. A transmission queue cannot be used by another channel if the previous channel to use it terminated leaving a batch of messages in-doubt at the sending end.
79. What is the command used to test whether the channel is active or not?
Ans: runmqsc QMName
PING CHANNEL(channel.name)
80. What are the administrative commands that are used in Publish and Subscribe?
Ans: The strmqbrk command is used to start a broker. The first time this command is run on a queue manager, all the relevant MQSeries objects are automatically created.
strmqbrk -m MYQMGRNAME
The dspmqbrk command is used to check the status of the broker. Possible states are: starting, running, stopping, quiescing, not active and ended abnormally.
dspmqbrk -m MYQMGRNAME
The endmqbrk command is used to stop a broker. There are two options: -c requests a controlled shutdown (the default), -i requests an immediate shutdown.
endmqbrk -i -m MYQMGRNAME
81. What is multi-hopping (the concept of hop-through)?
Ans: If there is no direct communication link between the source queue manager and the target queue manager, it is possible to pass through one or more intermediate queue managers on the way to the target queue manager. This is known as a multi-hop.
82. What is Local administration and Remote administration?
Local Administration: Means carrying out administration tasks on any queue managers you have defined on your local system.
Remote Administration: This allows you to issue commands from your local system that are processed on another system. For example, you can issue a remote command to change a queue definition on a remote queue manager. You do not have to log on to that system, although you do need to have the appropriate channels defined. The queue manager and command server on the target system must be running
83. What is the difference between control commands on Windows and on other operating systems?
Control commands are case sensitive on UNIX and other operating systems, but on Windows they can be entered in any case.
84. What is the MQOO_BIND_ON_OPEN option on the MQOPEN call?
When this option is set, all messages put using that object handle are sent to the same instance of the queue (they go to the same queue in the cluster).
85. Difference between MQPUT and MQPUT1 call ?
The MQPUT1 call always operates as though MQOO_BIND_NOT_FIXED were in effect, that is, it always invokes the workload management routine.
86. When is a channel security exit program called?
Security exits are called at MCA initiation and termination. You can use them, for example, to stop unauthorized queue managers from putting messages on your queues, together with OS security, the Object Authority Manager (OAM), and user-written procedures.
87. What happens if the dead letter queue is not defined?
If a dead letter queue is not defined and a message cannot be delivered, the message remains on the transmission queue and the channel stops (goes into retry), so messages build up behind it.
88. Explain Remote queue definitions? Advantages?
These are definitions for queues that are owned by another queue manager
Advantages: The advantage of remote queue definitions is that they enable an application to put a message to a remote queue without having to specify the name of the remote queue or the remote queue manager, or the name of the transmission queue. This gives you location independence.
89. What happens if channel terminates when fast non-persistent messages are in transit?
If a channel terminates while fast, non-persistent messages are in transit, the messages are lost and it is up to the application to arrange for their recovery if required. If the receiving channel cannot put the message to its destination queue then it is placed on the dead letter queue, if one has been defined. If not, the message is discarded.
90. What happens when a message cannot be delivered?
Message-retry: If the MCA is unable to put a message to the target queue for a reason that could be transitory (for example, because the queue is full), the MCA has the option to wait and retry the operation later
Return-to-sender: If message-retry was unsuccessful, or a different type of error was encountered, the MCA can send the message back to the originator
Dead-letter queue: If a message cannot be delivered or returned, it is put on to the dead-letter queue (DLQ). You can use the DLQ handler to process the message
Recovery scenarios: disk drive full, damaged queue manager object, damaged single object, automatic media recovery failure.
MQ ensures that messages are not lost by maintaining records (logs) of the activities of the queue managers that handle the receipt, transmission, and delivery of messages
91. How to Process Messages from the Dead-letter-Queue?
We can Process the DLQ messages using runmqdlq command for sending messages to the destination Queues or target Queues. Use the runmqdlq command to start the dead-letter queue (DLQ) handler, which monitors and handles messages on a dead-letter queue.
runmqdlq QName QMgrName
Use the Dead-Letter-Queue-Handler to perform various actions on selected messages by specifying a set of rules that can both select a message and define the action to be performed on that message.
The runmqdlq command takes its input from stdin. When the command is processed, the results and a summary are put into a report that is sent to stdout.
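A small sketch of a rules table, in which the first line is the control data (which DLQ and queue manager to process, and the default retry settings), the second rule retries messages that arrived because the target queue was full, and the last rule forwards everything else to a holding queue (all names are invented):

  INPUTQ(SYSTEM.DEAD.LETTER.QUEUE) INPUTQM(QM1) RETRYINT(60) WAIT(YES)
  REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(5)
  ACTION(FWD) FWDQ(DLQ.MANUAL.REVIEW) HEADER(YES)

With the table saved in a file, the handler would then be started with something like: runmqdlq < dlq.rul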
92. Which field of the MQDLH structure contains a reason code that identifies why the message is on the DLQ?
Reason field
93. What is completion code(MQCC) and reason code(MQRC)?
Completion code gives the status of the current transaction it can be 0, 1, 2. 0- for Successful completion (MQCC_OK), 1- Warning (MQCC_WARNING), 2- call failed (MQCC_FAILED). Reason code is that which gives the reason for which the transaction fails it can be MQRC_NONE, MQRC_BACKED_OUT etc.
94. What is Correl ID?
This is a byte string that the application can use to relate one message to another, or to relate the message to other unit of work that the application is performing. The correlation identifier is a permanent property of the message, and persists across restarts of the queue manager
95. Explain commit and Back Out units of work?
When a program puts a message on a queue within a unit of work, that message is made visible to other programs only when the program commits the unit of work.
Commit: To commit a unit of work, all updates must be successful to preserve data integrity. If the program detects an error and decides that the put operation should not be made permanent, it can back out the unit of work.
Back Out: When a program performs a back out, WebSphere MQ restores the queue by removing the messages that were put on the queue by that unit of work. The way in which the program performs the commit and back out operations depends on the environment in which the program is running
96. What is BackoutCount (MQLONG)?
This is a count of the number of times that the message has previously been returned by the MQGET call as part of a unit of work and subsequently backed out; in other words, it indicates how many times an application has retrieved the message and then failed and backed out.
97. What is segmentation and explain segmentation Flag?
When a message is too big for a queue, an attempt to put the message on the queue usually fails. Segmentation is a technique whereby the queue manager or application splits the message into smaller pieces called segments, and places each segment on the queue as a separate physical message. The application that retrieves the message can either retrieve the segments one by one, or request the queue manager to reassemble the segments into a single message that is returned by the MQGET call.
98. What are namelists? When do you use them?
A namelist is a WebSphere MQ object that contains a list of other WebSphere MQ objects. Typically, namelists are used:
· By trigger monitors, where they are used to identify a group of queues.
· With queue manager clusters, to maintain a list of clusters referred to by more than one WebSphere MQ object.
The advantage of using a namelist is that it is maintained independently of applications; it can be updated without stopping any of the applications that use it. Also, if one application fails, the namelist is not affected and other applications can continue using it.
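For example (the namelist and queue names are invented):

  DEFINE NAMELIST(TRIG.QUEUES) +
         NAMES(ORDERS.QUEUE, INVOICES.QUEUE, SHIPMENTS.QUEUE)
  DISPLAY NAMELIST(TRIG.QUEUES) NAMES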
99. What are name services?
The name service is an installable service that provides support to the queue manager for looking up the name of the queue manager that owns a specified queue.
100. What are local units of work (single-phase commit) and global units of work (two-phase commit)?
Local unit of work: units of work that involve only the queue manager are called local units of work. Syncpoint coordination is provided by the queue manager itself (internal coordination) using a single-phase commit process.
Global unit of work: use global units of work when you also need to include updates to resources belonging to other resource managers. Here the coordination can be internal or external to the queue manager, and a two-phase commit process is used.
101. How will we start a command server?
Depending on the value of the queue manager attribute, SCMDSERV, the command server is either started automatically when the queue manager starts, or must be started manually.
Start: Using strmqcsv saturn.queue.manager where saturn.queue.manager is the QM name
Display: dspmqcsv Stop: endmqcsv
102. When we use CCSID attribute of the ALTER QMGR command to change the CCSID of the QM what are the components that need to be restarted?
Stop and restart the queue manager, stop and restart command server (A command server processes command messages) and channel programs
103. What is an MQ Series queue manager configuration file (qm.ini)?
A queue manager configuration file (qm.ini) is used to effect changes for a specific queue manager; there is one qm.ini file for each queue manager on the node. (The qm.ini file contains configuration information relevant to a specific queue manager and is automatically created when the queue manager with which it is associated is created. For example, the path and name of the configuration file for a queue manager called QMNAME is /var/mqm/qmgrs/QMNAME/qm.ini.)
104. What is name transformation in naming a Queue manager Configuration File?
A qm.ini file is held in the root of the directory tree occupied by the queue manager. For example, the path and the name for a configuration file for a queue manager called QMNAME is: /var/mqm/qmgrs/QMNAME/qm.ini A directory name is generated based on the queue manager name. This process is known as name transformation.
105. What is a Websphere MQ configuration file (mqs.ini)?
Contains information relevant to all the queue managers on the node. It is created automatically during installation (The WebSphere MQ configuration file, mqs.ini, contains information relevant to all the queue managers on the node. It is created automatically during installation. The mqs.ini file for WebSphere MQ for UNIX systems is in the /var/mqm directory. It contains: v The names of the queue managers v The name of the default queue manager The location of the files associated with each of them)
106. How can we edit the configuration files?
Automatically using commands that change the configuration of queue managers on the node, Manually using a standard text editor
107. When security checks are made?
Connecting to the queue manager (MQCONN or MQCONNX calls), Opening the object (MQOPEN or MQPUT1 calls), Putting and getting messages (MQPUT or MQGET calls), Closing the object (MQCLOSE)
108. What is FFST?
First Failure Support Technology. For MQSeries for UNIX systems, FFST information is recorded in a file in the /var/mqm/errors directory. These errors are normally severe, unrecoverable errors, and indicate either a configuration problem with the system or an MQSeries internal error. The files are named AMQnnnnn.mm.FDC, where nnnnn is the ID of the process reporting the error and mm is a sequence number, normally 0. When a process creates an FFST record, it also sends a record to syslog. The record contains the name of the FFST file to assist in automatic problem tracking.
109. How to re-Create Damaged Objects Using Log files?
1.Rcdmqimg: Use this command to write an image of an object, or group of objects, to the log for use in media recovery. This command can only be used when using linear logging.
Use the associated command rcrmqobj to recreate the object from the image.
2.Rcrmqobj: Use this command to recreate an object, or group of objects, from their images contained in the log. This command can only be used when using linear logging
Use the associated command, rcdmqimg, to record the object images to the log.
Types of recovery:
Restart recovery: When you stop WebSphere MQ in a planned way.
Crash recovery: When a failure stops WebSphere MQ.
Media recovery: To restore damaged objects.
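For example, with linear logging you could record an image of a local queue and later re-create it from the log like this (the queue manager and queue names are invented; the object type keyword is the one for queues, so check the command reference for your release):

  # record an image of the queue to the log for media recovery
  rcdmqimg -m QM1 -t queue ORDERS.QUEUE
  # later, if the object is damaged, re-create it from the logged image
  rcrmqobj -m QM1 -t queue ORDERS.QUEUE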
110. What are the locations and files of the Error Logging?
MQ Series level errors:
C:\Program Files\IBM\WebSphere MQ\errors → AMQERR01.LOG, AMQERR02.LOG, AMQERR03.LOG
Queue manager level errors:
C:\Program Files\IBM\WebSphere MQ\qmgrs\<QMgrName>\errors → AMQERR01.LOG, AMQERR02.LOG, AMQERR03.LOG
111. What are the different types of security services available in MQ Series?
Identification and authentication.
Access control → the access control service protects critical resources in a system by limiting access only to authorized users and their applications.
Confidentiality → the confidentiality service protects sensitive information from unauthorized disclosure.
Data integrity → the data integrity service detects whether there has been unauthorized modification of data. There are two ways in which data might be altered: accidentally, through hardware and transmission errors, or because of a deliberate attack.
Non-repudiation.
Commands For Authorization:
1.setmqaut: Command used to change the authorizations to a profile, object or class of objects. Authorizations can be granted to, or revoked from, any number of principals or groups.
2.dspmqaut: Command to display the current authorizations to a specified object. If a user ID is a member of more than one group, this command displays the combined authorizations of all the groups.
Only one group or principal can be specified.
3.dmpmqaut: Command to dump the current authorizations to a specified object.
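A typical sequence (the queue manager, queue and group names are examples):

  # grant the group appgrp permission to put, get, browse and inquire on one queue
  setmqaut -m QM1 -t queue -n ORDERS.QUEUE -g appgrp +put +get +browse +inq
  # check what that group ended up with
  dspmqaut -m QM1 -t queue -n ORDERS.QUEUE -g appgrp
  # dump all authority records for objects matching ORDERS.*
  dmpmqaut -m QM1 -t queue -n "ORDERS.*"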
112. What are the different methods used by MQ Series for securing a message?
Cryptography, message digests, digital signatures, digital certificates, and Public Key Infrastructure (PKI).
113. What is Cryptography, Why and where it is used in MQ Series?
Cryptography is the process of converting between readable text, called plaintext, and an unreadable form, called cipher text.
The sender converts the plaintext message to cipher text. This part of the process is called encryption (sometimes encipherment).The cipher text is transmitted to the receiver. The receiver converts the cipher text message back to its plaintext form. This part of the process is called decryption (sometimes decipherment).
The conversion involves a sequence of mathematical operations that change the appearance of the message during transmission but do not affect the content. Cryptographic techniques can ensure confidentiality and protect messages against unauthorized viewing (eavesdropping), because an encrypted message is not understandable. Digital signatures, which provide an assurance of message integrity, use encryption techniques.
114. What is a Message Digest, Digital Signature and Digital Certificate?
Message digest: Is also known as a Message Authentication Code (MAC), because it can provide assurance that the message has not been modified. The message digest is sent with the message itself. The receiver can generate a digest for the message and compare it with the sender’s digest. If the two digests are the same, this verifies the integrity of the message. Any tampering with the message during transmission almost certainly results in a different message digest.
Digital signature: Is formed by encrypting a particular representation of a message the encryption uses the private key of the signatory and, for efficiency, usually operates on a message digest rather than the message itself. Digital signatures vary with the data being signed, unlike handwritten signatures, which do not depend on the content of the document being signed. If two different messages are signed digitally by the same entity, the two signatures differ, but both signatures can be verified with the same public key, that is, the public key of the entity that signed the messages.
Digital certificates: Provide protection against impersonation, because a digital certificate binds a public key to its owner, whether that owner is an individual, a queue manager, or some other entity. Digital certificates are also known as public key certificates, because they give you assurances about the ownership of a public key when you use an asymmetric key scheme.
115. What is a Secure Sockets Layer (SSL), where it is used?
The Secure Sockets Layer (SSL) provides an industry standard protocol for transmitting data in a secure manner over an insecure network. The SSL protocol is widely deployed in both Internet and Intranet applications. SSL defines methods for authentication, data encryption, and message integrity for a reliable transport protocol, usually TCP/IP.
116. What are Cipher Suites and Cipher Specs in SSL?
Cipher Suite: a suite of cryptographic algorithms used by an SSL connection. A suite comprises three distinct algorithms: the key exchange and authentication algorithm, used during the SSL handshake; the encryption algorithm, used to encipher the data; and the MAC (Message Authentication Code) algorithm, used to generate the message digest.
Cipher Spec: Identifies the combination of the encryption algorithm and MAC algorithm. Both ends of an SSL connection must agree the same CipherSpec to be able to communicate.
117. What are the steps to be followed when working with SSL in a UNIX environment?
1. Setting up a key repository
2. Working with a key repository
3. Obtaining personal certificates
4. Managing digital certificates
5. Configuring for cryptographic hardware
6. Mapping DNs to user IDs
7. Adding personal certificates to a key repository
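On the queue manager side, the key repository and the channel CipherSpec are set with MQSC; a sketch (the path, channel name and chosen CipherSpec are only examples) might look like:

  * point the queue manager at its key repository (stem name, without a file extension)
  ALTER QMGR SSLKEYR('/var/mqm/qmgrs/QM1/ssl/key')
  * require SSL with a specific CipherSpec on a channel
  ALTER CHANNEL(QM1.TO.QM2) CHLTYPE(SDR) SSLCIPH(TRIPLE_DES_SHA_US)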
118. What are the WebSphere MQ installation naming considerations?
Ensure that the machine name does not contain any spaces; if you install on such a machine you cannot create any queue managers. Names for user IDs and groups must be no longer than 20 characters.
119. What is CCSID?
This defines the character set of character data in the message. If you want to set this character set to that of the queue manager, you can set this field to the constant MQCCSI_Q_MGR or MQCCSI_INHERIT. When you get a message from a queue, compare the value of the CodedCharSetId field with the value that your application is expecting. If the two values differ, you might need to convert any character data in the message or use a data-conversion message exit if one is available
Channel: Communication Paths between Queue Managers.
Tell Some Default objects: (43 objects)
Queues: SYSTEM.DEFAULT.LOCAL.QUEUE SYSTEM.DEFAULT.MODEL.QUEUE
SYSTEM.DEFAULT.REMOTE.QUEUE SYSTEM.DEFAULT.ALIAS.QUEUE
SYSTEM.DEFAULT.INITIATION.QUEUE SYSTEM.DEAD.LETTER.QUEUE
Channel Queues: SYSTEM.CHANNEL.INITQ SYSTEM.CHANNEL.SYNCQ
Admin Queues: SYSTEM.ADMIN.ACCOUNTING.QUEUE
SYSTEM.ADMIN.ACTIVITY.QUEUE
SYSTEM.ADMIN.COMMAND.QUEUE
SYSTEM.ADMIN.STATISTICS.QUEUE
SYSTEM.ADMIN.TRACE.ROUTE.QUEUE
Channels: SYSTEM.AUTO.RECEIVER SYSTEM.AUTO.SVRCONN
SYSTEM.DEF.CLUSRCVR SYSTEM.DEF.CLUSSDR
SYSTEM.DEF.RECEIVER SYSTEM.DEF.REQUESTER
SYSTEM.DEF.SENDER SYSTEM.DEF.SERVER
SYSTEM.DEF.SVRCONN
Listeners: SYSTEM.DEFAULT.LISTENER.TCP
SYSTEM.DEFAULT.LISTENER.SPX
SYSTEM.DEFAULT.LISTENER.NETBIOS
SYSTEM.DEFAULT.LISTENER.LU62
Process Def: SYSTEM.DEFAULT.PROCESS
Services: SYSTEM.DEFAULT.SERVICE SYSTEM.BROKER
Name Lists: SYSTEM.DEFAULT.NAMELIST
Event Queues: SYSTEM.ADMIN.CHANNEL.EVENT
SYSTEM.ADMIN.LOGGER.EVENT
SYSTEM.ADMIN.PERFM.EVENT
SYSTEM.ADMIN.QMGR.EVENT
120. What are the advantages of creating aliases? Why do we create aliases?
When sending messages: remapping the queue-manager name, and altering or specifying the transmission queue. When receiving messages: determining the destination. A queue-manager alias can also be used as a gateway into a cluster. Queue aliases give different applications different levels of access authority to the target queue, allow different applications to work with the same queue in different ways, and simplify maintenance, migration, and workload balancing.
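Two illustrative definitions (all names are invented): a queue-manager alias on a gateway that remaps the name QM3 so traffic for it is routed over the transmission queue to QM2, and a queue alias that exposes a local queue as put-only:

  DEFINE QREMOTE(QM3) RNAME(' ') RQMNAME(QM3) XMITQ(QM2)
  DEFINE QALIAS(ORDERS.ALIAS) TARGQ(ORDERS.LOCAL) PUT(ENABLED) GET(DISABLED)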
121. What are the parameters required to put a message on a queue?
Putting a message requires a connection handle (Hconn), a queue handle (Hobj), a description of the message that you want to put on the queue (MQMD), control information, the message length, and the message data itself.
122. Getting messages for a queue?
You can remove a message from the queue so that other programs can no longer see the message, you can copy a message, leaving the original message on the queue. This is known as browsing. You can remove the message once you have browsed it. In both cases, you use the MQGET call, but first your application must be connected to the queue manager, and you must use the MQOPEN call to open the queue
123. What happens when a message is put in a PUT-INHIBITED Queue?
The messages are put in the dead letter queue. If a channel is unable to put a message to the target queue because that queue is full or put inhibited, the channel can retry the operation a number of times (specified in the message-retry count attribute) at a given time interval (specified in the message-retry interval attribute). Alternatively, you can write your own message-retry exit that determines which circumstances cause a retry, and the number of attempts made. The channel goes to PAUSED state while waiting for the message-retry interval to finish
124. What is syncpoints?
Syncpoint coordination is the process by which units of work are either committed or backed out with data integrity. The decision to commit or back out the changes is taken, in the simplest case, at the end of a transaction. However, it can be more useful for an application to synchronize data changes at other logical points within a transaction.
These logical points are called syncpoints (or synchronization points) and the period of processing a set of updates between two syncpoints is called a unit of work
125. What is an in-doubt channel? How will you resolve it?
An in-doubt channel is a channel that is in doubt with the remote channel about which messages have been sent and received; that is, the sending end does not know whether the current batch was committed at the receiving end.
Solution: We can resolve the channel by committing or backing out the in-doubt batch of messages.
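In runmqsc, the in-doubt status can be checked and then resolved manually (a sketch; TO.QM2 is a placeholder channel name):
DISPLAY CHSTATUS(TO.QM2) INDOUBT CURLUWID
RESOLVE CHANNEL(TO.QM2) ACTION(COMMIT)
RESOLVE CHANNEL(TO.QM2) ACTION(BACKOUT)
Use ACTION(COMMIT) if the partner did receive the batch, ACTION(BACKOUT) if it did not.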
Scenarios:
Queue manager connection failed?
*Reason: On an MQCONN or MQCONNX call, the value specified for the QMgrName parameter is not valid or not known (for example, reason code 2058 MQRC_Q_MGR_NAME_ERROR).
*Resolution: We must correct the configuration information (the queue manager name, or the client channel definition).
Queue not found?
*Reason: reason code 2085 MQRC_UNKNOWN_OBJECT_NAME.
*Resolution: Check for the queue name in the queue manager; if it is not found, define it.
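A quick check and fix in runmqsc (placeholder queue name):
DISPLAY QUEUE(APP.REQUEST.QUEUE)
DEFINE QLOCAL(APP.REQUEST.QUEUE)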
Messages sent to the DLQ?
*Reason code: 2218 MQRC_MSG_TOO_BIG_FOR_CHANNEL (message too big for the channel).
*Investigation: Examine the contents of the dead-letter queue. Each message is contained in a structure (the dead-letter header) that describes why the message was put to the queue and where it was originally addressed. Also look at previous error messages to see if an attempt to put messages to the dead-letter queue failed.
*Resolution: Increase the maximum message length (MAXMSGL) of the channel as required; if the channel is a cluster channel, issue a REFRESH CLUSTER so that the change is reflected on the other queue managers, then reprocess the message.
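For example (placeholder channel and cluster names; 104857600 bytes is the 100 MB maximum):
ALTER CHANNEL(TO.QM2) CHLTYPE(SDR) MAXMSGL(104857600)
REFRESH CLUSTER(MYCLUSTER) REPOS(NO)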
Messages piling up in a queue (queue FULL)?
*Investigation: Check the queue manager error logs (/var/mqm/qmgrs/<qmgrname>/errors/*.log); in this case the messages were not being processed because of a very high workload.
*Resolution: Make sure the consuming application is running and able to keep up; if necessary, temporarily increase the queue depth (MAXDEPTH) while the backlog is cleared.
SSL problems?
*Authentication failure, typically because:
The SSL client does not have a certificate
A certificate has expired or is not yet active
A certificate is not supported
A certificate is corrupted
An SSL version mismatch (an upgrade may be required)
Channel refuses to run or keeps retrying?
*Reason: A mismatch of names between the sending and receiving channels, an incorrect channel type specified, the receiver channel being in stopped state, the connection not being defined correctly, or a problem with the communication software or the remote listener.
*Resolution: Correct the channel definitions and restart the channel; if it is a cluster channel, alter the affected object and issue a REFRESH CLUSTER so that the information stored in the partial repository is updated.
126. How do you handle messages larger than 4 MB?
Increase the MaxMsgLength attribute of the queue, the queue manager and the channels involved; use segmented messages (messages can be segmented by either the application or the queue manager); or use reference messages.
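For example, to allow messages up to 100 MB end to end (placeholder names; every queue, queue manager and channel on the route must be altered):
ALTER QMGR MAXMSGL(104857600)
ALTER QLOCAL(BIG.MSG.QUEUE) MAXMSGL(104857600)
ALTER CHANNEL(TO.QM2) CHLTYPE(SDR) MAXMSGL(104857600)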
127. What is DQM (Distributed Queuing Management)?
It is the setting up and controlling of message channels so that messages can be exchanged between queue managers on distributed systems.
128. What is the SSL Version used in WMQ5.3?
Version 3.0
129. NPMSPEED(FAST): what happens if the channel goes down?
Nonpersistent message speed (NPMSPEED) specifies the speed at which nonpersistent messages are to be sent. It can take two values, NORMAL or FAST. The default is FAST, which means that nonpersistent messages on the channel are not transferred within transactions; they are therefore lost if there is a transmission failure or if the channel stops while the messages are in transit.
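If that loss is not acceptable, the channel can be switched to transactional delivery of nonpersistent messages (placeholder channel name; the matching change must be made at both ends):
On QM1: ALTER CHANNEL(TO.QM2) CHLTYPE(SDR) NPMSPEED(NORMAL)
On QM2: ALTER CHANNEL(TO.QM2) CHLTYPE(RCVR) NPMSPEED(NORMAL)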
130. What is SSL?
Secure Sockets Layer (SSL) is a protocol designed to allow the transmission of secure data over an insecure network. SSL makes use of digital certificates to enable authentication of the partner. It also uses encryption to prevent eavesdropping and hash functions to enable detection of tampering. It can be used with both MCA channels for queue manager to queue manager communication and MQI channels for client applications connecting to a queue manager
131. What are the algorithms in SSL?
A CipherSuite is a suite of cryptographic algorithms used by an SSL connection. A suite comprises three distinct algorithms:
The key exchange and authentication algorithm, used during the SSL handshake
The encryption algorithm, used to encipher the data
The MAC (Message Authentication Code) algorithm, used to generate the message digest
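On a WebSphere MQ channel the CipherSuite is chosen through the SSLCIPH attribute, which names a CipherSpec combining these algorithms; both channel ends must specify a matching value. For example (placeholder channel name; TRIPLE_DES_SHA_US is one of the CipherSpecs available in this era of the product):
ALTER CHANNEL(TO.QM2) CHLTYPE(SDR) SSLCIPH(TRIPLE_DES_SHA_US)
DISPLAY CHANNEL(TO.QM2) SSLCIPH SSLCAUTH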
132. What is Triggering?
Ans: WebSphere MQ enables you to start an application automatically when certain conditions on a queue are met. For example, you might want to start an application when the number of messages on a queue reaches a specified number. This facility is called triggering.
133. How many ways of triggering are there?
EVERY: A trigger event occurs every time that a message arrives on the application queue. Use this type of trigger if you want a serving program to process only one message, then end.
FIRST: A trigger event occurs only when the number of messages on the application queue changes from zero to one. Use this type of trigger if you want a serving program to start when the first message arrives on a queue, continue until there are no more messages to process, then end.
DEPTH: A trigger event occurs only when the number of messages on the application queue reaches the value of the TriggerDepth attribute.
134. What are the trigger types available? Explain.
a. Application triggering  b. Channel triggering
a) In the case of application triggering the components are:
Application queue: the message queue associated with an application.
Process: a process definition that defines the application to be used to process messages from the application queue.
Initiation queue: the queue manager monitors the application queue; if the trigger type of the application queue is set to EVERY, then whenever a message is put to the application queue the queue manager looks into the process definition and puts a trigger message containing the application name and other details onto the initiation queue.
Trigger monitor: the trigger monitor gets the trigger message from the initiation queue and starts the program specified.
b) For channel triggering the transmission queue is monitored, and when messages are put on the transmission queue the queue manager puts a trigger message on the channel initiation queue. The channel initiator is the program that monitors the initiation queue and starts the sender MCA. For the message to reach the target queue, the channel listener has to be running on the target queue manager.
Channel Triggering Conditions (a sample definition follows this list):
· Trigger ON
· Trigger type (FIRST, EVERY or DEPTH)
· Trigger data (the name of the channel that is to be started)
· Initiation queue (SYSTEM.CHANNEL.INITQ)
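For illustration, a transmission queue set up to trigger a sender channel might look like this (a sketch with placeholder names; the channel initiator that serves SYSTEM.CHANNEL.INITQ is normally started with the queue manager, or can be started explicitly with runmqchi):
DEFINE QLOCAL(QM2.XMITQ) USAGE(XMITQ) TRIGGER TRIGTYPE(FIRST) TRIGDATA(TO.QM2) INITQ(SYSTEM.CHANNEL.INITQ)
runmqchi -m QM1 -q SYSTEM.CHANNEL.INITQ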
Channel Triggering Background process:
1. The local queue manager places a message from an application or from a message channel agent (MCA) on the transmission queue.
2. When the triggering conditions are fulfilled, the local queue manager places a trigger message on the initiation queue.
3. The long-running channel initiator program monitors the initiation queue, and retrieves messages as they appear.
4. The channel initiator processes the trigger messages according to information contained in them. This information may include the channel name, in which case the corresponding MCA is started.
5. The channel listener running on the target queue manager starts the receiving MCA.
Application Triggering Conditions:
§  Trigger ON
§  Trigger type (FIRST, EVERY or DEPTH)
§  Initiation queue (SYSTEM.DEFAULT.INITIATION.QUEUE or an initiation queue defined as a local queue of our own)
§  Process (NOTEPAD)
DEFINE QLOCAL(LQ) TRIGGER TRIGTYPE(EVERY) INITQ(IQ) PROCESS(NOTEPAD)
DEFINE PROCESS(NOTEPAD) APPLICID('NOTEPAD.EXE') APPLTYPE(WINDOWS)
runmqtrm -m QM1 -q IQ
BACKGROUND PROCESS:
1. Whenever a message arrives on the triggered local queue, the queue manager fires a trigger message, containing the trigger information and the process definition (the application that is to be triggered), into the initiation queue (IQ, our own queue).
2. At the initiation queue, a long-running program called the trigger monitor is watching (monitoring) the initiation queue.
3. Whenever a trigger message appears on the initiation queue, the trigger monitor picks up the information and starts the application defined in the process.
135. What is a Trigger monitor?
A trigger monitor is a continuously running program that serves one or more initiation queues. When a trigger message arrives on an initiation queue, the trigger monitor retrieves the message, uses the information in it, and issues a command to start the corresponding application or channel.
136. What is the command used to run the trigger monitor?
Ans: On Server side: runmqtrm -m QMName -q Initiation QueueName
On Client side: runmqtmc -m QMName -q Initiation QueueName
137. What is a listener?
§  It is a service of WebSphere MQ.
§  Every queue manager will have a listener defined with a unique port number (the default port number is 1414).
§  The listener acts as a mediator between external applications or queue managers and the queue manager they are connecting to.
§  To contact the queue manager, we connect through the listener.
138. Is it possible to retrieve a message from a dead-letter queue? If so, how?
§  Yes. You run the dead-letter queue handler (runmqdlq) with a rules table to process the messages on the dead-letter queue.
§  Two useful actions in the rules table are ACTION(FWD), which forwards the message to the queue named by FWDQ (and FWDQM), and ACTION(RETRY), which retries the put to the original destination.
§  A rules table can contain entries such as:
INPUTQ(DEAD.LETTER.QUEUE) INPUTQM(QMGR NAME)
REASON(2087) ACTION(FWD) FWDQ(QUEUENAME) FWDQM(QMGR NAME)
§  If the dead-letter queue name is SYSTEM.DEAD.LETTER.QUEUE, you can also retrieve (destructively get) its messages with the sample program:
amqsget SYSTEM.DEAD.LETTER.QUEUE QMGRNAME
§  The general command to replay the messages on the dead-letter queue is:
runmqdlq QName QMName
A fuller rules table sketch is shown below.
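As a hedged sketch (placeholder queue and queue manager names; the MQRC_Q_FULL rule and the forward queue are examples, not taken from the answer above), a complete rules table file and its invocation might look like this:
* myrules.rul - control data line first, then rules; the last rule is a catch-all
INPUTQ(SYSTEM.DEAD.LETTER.QUEUE) INPUTQM(QM1) WAIT(NO)
REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(3)
REASON(2085) ACTION(FWD) FWDQ(ORPHANED.MESSAGES) FWDQM(QM1) HEADER(NO)
ACTION(IGNORE)

runmqdlq SYSTEM.DEAD.LETTER.QUEUE QM1 < myrules.rul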
139. What are the different channel states which ensure the stage-to-stage operation of an MQ channel?
There are five main channel states:
§  Running
§  Inactive
§  Retrying
§  Stopped
§  Paused(receiver channel)
1. RUNNING: before going into RUNNING state the status will be INITIALIZING and then BINDING.
Initializing: the channel is starting and establishes contact through the listener.
Binding: the sender channel binds with (negotiates with) the receiver; after that it goes into RUNNING state.
2. INACTIVE: there is an attribute called disconnect interval (DISCINT), with a default of 6000 seconds, which can be changed to suit our needs. If the channel is idle for the period defined in the disconnect interval, the channel goes into INACTIVE state.
3. RETRYING: the channel goes into RETRYING state if, for example, the queue manager on the other side is not available, there is a network issue, the listener is not running, the receiver channel is in paused or stopped state, or the receiver channel transport type is different.
4. PAUSED: this state applies to the receiver (RCVR) channel. It occurs when the channel cannot put to the target queue (for example because it is full or put inhibited) and is waiting for the message-retry interval to finish.
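The current state of a channel can be checked in runmqsc (placeholder channel name); note that a channel with no saved status, such as one that has never run, returns "Channel Status not found" instead:
DISPLAY CHSTATUS(TO.QM2)
DISPLAY CHSTATUS(TO.QM2) STATUS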
Note:-
1. If we make any changes to channels, listeners or the queue manager, we need to stop and restart them for the changes to take effect.
2. Before starting a channel, the listener on the remote side should be active/running; we can check this by pinging the channel.
3. Ping is used to check whether the receiver end is active (reachable) or not.
Syntax: – PING CHANNEL (CHANNEL NAME)
140. List some MQ listener operations:
To create a new listener:
DEFINE LISTENER(LISTENER_NAME) TRPTYPE(TCP) PORT(PORT_NUMBER)
To start a listener:
START LISTENER(LISTENER_NAME)
To stop a listener:
STOP LISTENER(LISTENER_NAME)
To display listener status:
DISPLAY LSSTATUS(LISTENER_NAME)


DataPower:

Before you start

About this tutorial

This tutorial gives you the basic steps of how to develop a Data Web Service with IBM Data Studio and shows you how to deploy it on an IBM WebSphere DataPower XI50 Integration Appliance.

Motivation

IBM Data Studio Developer provides an easy way to expose database data as a service. While the service definition and artifact generation are important to understand, if you want to get serious about SOA, you must consider the problems of security, performance, governance and monitoring. Many of those issues cannot be solved by the service definition alone; they require additional infrastructure and configuration steps. Even though it's possible to create an "enterprise-ready" SOA environment with traditional J2EE application servers, it might not always be the best approach for exposing enterprise data as services.
IBM WebSphere DataPower provides a distinct and competitive hardware, software, and infrastructure stack that allows you to address many of the SOA challenges mentioned earlier. This infrastructure can now be leveraged for Data Web Services: IBM Data Studio Developer 1.2 (or later) supports the WebSphere DataPower XI50 Integration Appliance as a runtime environment, together with the DataPower enhancements for database access in the 3.7.1 firmware release.
Using WebSphere DataPower XI50 Integration Appliance as the hosting environment for Data Web Services allows you to leverage the superior support of network protocols. This gives a wide variety of clients the ability to talk to DB2 — without even being database-aware. Furthermore, all the other DataPower advantages like security, XML hardware acceleration for parsing, schema validation and XSLT processing can be used.

Objectives

In this tutorial, you learn what steps are necessary to create Data Web Service runtime artifacts for the WebSphere DataPower XI50 Integration Appliance with IBM Data Studio Developer 2.1. Furthermore, you see how to create a DB2 data source on DataPower, how to transfer the service artifacts from Data Studio to DataPower, how to create an HTTP-based binding configuration using the XSL Accelerator, and how to create the WSDL/SOAP-based service binding using the Web Service Proxy.

Prerequisites

This tutorial is written for users who have basic knowledge of Web Services, databases, IBM Data Studio and IBM WebSphere DataPower Appliances.

System requirements

To run the examples in this tutorial, you need IBM Data Studio Developer 1.2 or 2.1 as well as DB2 version 8 or higher with the sample database. For service deployment, you need an IBM WebSphere DataPower Integration Appliance XI50 with firmware level 3.7.1 and the ODBC package.

Overview

In the tutorial you will:
  1. Create a Data Web service and DataPower runtime artifacts with IBM Data Studio Developer
  2. Create a DB2 data source on DataPower
  3. Upload the generated service artifacts to DataPower
  4. Configure an XSL Accelerator for HTTP POST XML binding
  5. Configure an XSL Accelerator for HTTP GET binding
  6. Configure a WS-Proxy for SOAP over HTTP binding

How it works

Introduction to IBM WebSphere DataPower SOA appliances

IBM WebSphere DataPower SOA Appliances are purpose-built, easy-to-deploy network devices that simplify, secure, and accelerate your XML and Web services deployments while extending your SOA infrastructure. These appliances offer an innovative, pragmatic approach to harness the power of SOA. By using them, you can simultaneously use the value of your existing application, security, and networking infrastructure investments.
A DataPower appliance can take on different roles in an SOA environment. The appliance can act as a simple XSL accelerator and transformation engine, serve as a firewall and security device or a multi-protocol gateway, and even function as an Enterprise Service Bus (ESB).
This tutorial focuses on DataPower as a transformation and protocol conversion engine that transforms Web service requests into database calls and the results back into service responses to implement a data access service. Figure 1 shows this transformation process:
Figure 1. Transforming requests into database calls
Transforming requests into database calls
A transformation is defined by a set of rules which determine how an input is mapped to an output. A very common format for Web services is XML. XSL is a way to define such mapping rules for XML-based inputs. DataPower provides first-class support for XSL processing with its purpose-built XML and XSL processing stack. Therefore, XSL is used to transform service requests into database calls and back. DataPower provides the XSL <dp:sql-execute> extension element to perform the database access operations.
To learn more about the IBM WebSphere DataPower appliance family, check out IBM WebSphere DataPower SOA Appliances Part 1 (see Resources).
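To give a feel for what such a stylesheet does, here is a hedged, minimal sketch (the data source name, statement and attribute usage are illustrative assumptions, not the generated artifact itself):
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:dp="http://www.datapower.com/extensions"
    extension-element-prefixes="dp">
  <!-- Run a query against the SQL data source object named DB2LUW95
       and copy the returned row set into the response document. -->
  <xsl:template match="/">
    <employees>
      <dp:sql-execute source="'DB2LUW95'"
                      statement="'SELECT EMPNO, LASTNAME FROM EMPLOYEE'"/>
    </employees>
  </xsl:template>
</xsl:stylesheet>
In practice you do not write these stylesheets by hand for Data Web Services; the next section explains how Data Studio Developer generates them for you.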

Use IBM Data Studio Developer to generate DataPower service artifacts

While the information provided so far gives you the ability to write the appropriate XSL scripts by hand to implement the transformation mappings necessary for a data access service, that is a very time-consuming and error-prone task. IBM Data Studio Developer provides a quick and easy way to expose database operations as Web services. After a service has been defined, artifacts for a specific runtime environment can be generated. Traditionally, those are artifacts for a J2EE/JavaEE-based application server, but now it's possible to also generate service artifacts for DataPower. The generated artifacts consist of the WSDL file of the service as well as a set of XSL scripts defining the mapping between service requests and database calls. Those artifacts can then be deployed on a DataPower XI50 appliance. Figure 2 shows how this process works:
Figure 2. Generate DataPower artifacts with Data Studio Developer
Generate DataPower artifacts with Data Studio Developer
Later, this tutorial uses a simple example to show you how to use Data Studio Developer to create a Data Web Service, how to generate the artifacts for DataPower, and how to deploy the service on a DataPower XI50 appliance.

Deploy the Web service

After the artifacts have been generated, it's time to deploy them on the DataPower appliance. DataPower provides different configuration categories for different kinds of services, as Figure 3 illustrates.
Figure 3. Various service configurations available on DataPower
Various service configurations available on DataPower
For simplicity, this tutorial only uses the XSL Accelerator to implement HTTP-based bindings and the Web Service Proxy to implement the WSDL/SOAP-based binding for the service. The generated artifacts can also be used with different configurations, for example, using MQ as message transport, or putting additional security like authentication/authorization or secure communication via SSL around it. The most common way to configure a DataPower appliance is through the WebGUI Web interface. All descriptions and screen shots in this tutorial are based on the DataPower WebGUI.

Configuration in DataPower

DataPower has a completely metadata-driven configuration approach. All service configurations have a front-end and back-end configuration as well as a processing policy which defines one or more processing rules. Depending on the selected service configuration, some of the configurations might be masked. For example, an XSL Accelerator configuration does not have an explicit back-end setup.
Figure 4. DataPower configuration
Policies, Rules, Front- and Backend

Front end

The front end defines the "client-facing" part of a configuration. Its main purpose is to configure protocol-specific settings like the TCP/IP port, encryption, transport protocol versions, properties, and the like.

Back end

The back end defines the configuration of the service and server to which the processed request should be sent. It usually contains protocol-specific settings and address information of the back-end service.
Note: A database is not considered to be a back-end system for DataPower. In the case of Data Web services, the requests are terminated at the DataPower appliance and not sent to a back-end system. Instead of providing a real back-end system in the configurations, we use a loopback configuration which turns an incoming request into a response. The database access happens inside the XSL scripts using the <dp:sql-execute> extension element.

Policy

A policy defines a set of one or more processing rules. Every service configuration on DataPower contains a policy setup.

Rule

A rule defines a sequence of actions (like matching, validation, transformation, routing, and so on) which get applied to a request or response message. For example, the XSL artifacts generated by Data Studio are added to such a processing rule in the form of a transform action.

Create a simple Web service and generate DataPower artifacts

This section shows how easy it is to create a simple Data Web Service for the DB2 sample database using IBM Data Studio Developer 2.1. You'll create a Web service containing a SELECT, UPDATE, INSERT and DELETE statement as well as a stored procedure call.

Step 1. Set up the Data Development Project

First, create a new data development project in Data Studio and name it DataPowerSamples. Base this project on a connection to a DB2 sample database. Your DataPower XI50 appliance also needs to be able to connect to this database. The DataPower setup is described later in this document.
Figure 5. The data development project
The Data Development Project
Ensure that you are using the Data Project Explorer in the Data perspective in order to see the SQL Scripts and Web Services folders.

Step 2. Create SQL statements

Now it's time to develop the SQL statements that you'll expose as Web Service operations.

Operation 1: getEmployeeById (SQL SELECT)

The first statement is a simple SQL SELECT to retrieve an employee record from the employee table by a given ID.
To create a statement, right-click on the SQL Scripts folder and select New -> SQL or XQuery script.
Figure 6. New SQL or XQuery statement
New SQL or XQuery Statement
Enter getEmployeeById as the statement name and keep the SQL editor setting. Click Finish. Alternatively, you can also use the SQL builder to assemble the statement.
Enter the following statement string in the editor window and save the statement:
SELECT * FROM EMPLOYEE WHERE EMPLOYEE.EMPNO = :empno
Now the statement should appear under the SQL Scripts folder like this:
Figure 7. SQL Scripts folder with SELECT statement
SQL Scripts folder with SELECT statement

Operation 2: updateEmployeeSalary (SQL UPDATE)

Repeat the same step as shown in the getEmployeeById example but use the name updateEmployeeSalary and the following SQL statement:
UPDATE EMPLOYEE SET SALARY = :newSalary WHERE EMPNO = :empno

Operation 3: insertEmployee (SQL INSERT)

Repeat the same step as shown in the getEmployeeById example but use the name insertEmployee and the following SQL statement:
INSERT INTO EMPLOYEE
  VALUES (:empno, :firstName, :midInitial, :lastName, :workDepartment,
    :phone, :hireDate, :job, :educationLevel, :sex, :birthDate,
    :salary, :bonus, :commission)
After creating all the statements, the SQL Scripts folder should look similar to the one in Figure 8.
Figure 8. SQL Scripts folder with all statements
SQL Scripts folder with all statements

Operation 4: BONUS_INCREASE (stored procedure)

The DB2 sample database already comes with the BONUS_INCREASE stored procedure. You can locate the procedure by navigating to the Stored Procedures folder under the appropriate schema in the database connection in the Data Source Explorer view:
Figure 9. Data Source Explorer with BONUS_INCREASE stored procedure
Data Source Explorer with BONUS_INCREASE stored procedure
Now you can start assembling the Web service.

Step 3. Create the Web service

Create a new Web service by selecting the DataPowerSamples project and right-clicking on the Web services folder. Select New Web Service ....
In the New Web Service dialog you can specify a Web service name and a namespace URI. In this instance, use DataPowerSampleService as the Web service name and http://ibm.com/example/DataPowerDWS as the namespace URI as Figure 10 illustrates.
Figure 10. New Web service
New Web Service
Click Finish to create the service.
To add the SQL statements as Web service operations to your service, simply drag and drop them from the SQL Scripts folder onto the DataPowerSampleService Web service:
Figure 11. Drag and drop SQL statements
Drag and Drop SQL Statements
To add the BONUS_INCREASE stored procedure to the Web service, drag and drop it from the Stored Procedures folder into the Web Service:
Figure 12. Drag and drop the BONUS_INCREASE stored procedure
Drag and Drop the BONUS_INCREASE stored procedure
Now all the operations are added to the service, and you can do some fine tuning.
Select DataPowerSampleService and double-click on the getEmployeeById operation to bring up the Edit Operation dialog. Check the Fetch only single row for queries option since you know that this query only returns one row at a time (an employee record), and click Finish. This setting simplifies the XML structure of the service response.
Figure 13. Single row fetch
Single Row Fetch
The BONUS_INCREASE procedure returns a result set. Since the DB2 catalog does not contain result set metadata for stored procedures, the generated XML schema for the response message will, by default, contain only a very generic schema definition using the <xsd:any> element. The Data Web Services tooling lets you adjust the result set schema by executing the procedure and taking the schema information from the returned result set metadata. To do that, double-click on the BONUS_INCREASE operation in the DataPowerSampleService to bring up the Edit Operation dialog. Click Next to get to the Generate XML Schema for Stored Procedure dialog and click Generate.
Figure 14. Generate XML schema for the stored procedure
Generate XML Schema for Stored Procedure
Since the BONUS_INCREASE procedure has input parameters, a new dialog appears where you can specify input values. Specify 1.1 as P_BONUSFACTOR and 100000 as P_BONUSMAXSUMFORDEPT and click OK. This executes the procedure so that the metadata from the returned result set can be analyzed. After the execution, you can click Finish on the Edit Operation dialog. That concludes the fine tuning of the Web service operations.
Figure 15. Execute stored procedure
Execute Stored Procedure

Step 4. Build service artifacts

Up to this point, the tutorial has not discussed the difference between a service to be deployed in a J2EE environment or one to be deployed on DataPower. In this step, you will create the service runtime artifacts, so you have to decide which runtime environment to use.
Right-click on the DataPowerSampleService and select Build and Deploy ....
Select DataPower as the Web server type. Ensure that REST and SOAP bindings are selected and provide a data source name for the artifact.dataSource property in the Parameters list. The name needs to match the data source, which you will set up on DataPower to connect to the DB2 sample database. The DB2 data source setup on DataPower is shown in the next section. Click Finish to generate the artifacts.
Figure 16. Deploy Web service
Deploy Web Service
The artifacts are written to a sub-directory inside the Data Development project. Since the Data perspective does not show the created folder, you need to open the Navigator view as shown:
Figure 17. Navigator view
Navigator View
The DataPowerSamples project contains a DataServerWebServices folder. In there you will find one directory for every Web service defined in your Data Development Project. A Web service folder contains the service metadata information in the .metadata folder. In case DataPower was selected as the runtime environment, you will also find an artifacts folder. This folder contains all the necessary service implementation files which need to be copied to DataPower appliance.
Note: If you don't see the artifacts folder, you need to refresh the project by right-clicking the DataPowerSamples project and select Refresh from the menu as Figure 18 illustrates.
Figure 18. Refresh project
Refresh Project
This concludes the work in Data Studio. Now it's time to take care of the DataPower configuration.

Set up a DB2 data source on DataPower

This section shows you how to set up a DB2 data source on DataPower using the DataPower WebGUI.
  1. Log into the DataPower WebGUI using your preferred domain.
  2. To create or modify an SQL data source, expand NETWORK in the left-hand menu and click on SQL Data Source. Click Add to create a new SQL data source as Figure 19 indicates.
    Figure 19. DataPower SQL Data Source setup
    DataPower SQL Data Source setup
  3. In the next dialog, provide all the necessary connection information. For the Name, enter DB2LUW95. Make sure the Admin State is set to enabled or DataPower will not connect to the database. Select DB2 (version 9) as the Database type; the version 9 driver also works with older DB2 versions. Enter your connection username and password. Enter the Data Source ID (sample), the Data Source Host (9.48.109.209), and Data Source Port (50000). Set Limit Returned Data and Allow Read-only Access to off. Set the Maximum Connections to 10.
    Figure 20. Configure a database connection
    Configure a Database connection
  4. Click Apply to save your settings. To persist configuration settings beyond DataPower shutdowns, click the Save Config button in the upper right corner.
  5. The data source will be up and running (and ready for use) when the Op-State says up (Figure 21). You can refresh the view by clicking the Refresh List link -- it can take some time until the data source is connected to the database.
    Figure 21. Operation state is up
    Operation state is up

Set additional driver parameters

The Data Source Configuration Parameters tab allows you to specify additional connection parameters. In DB2, you can define the CLI/ODBC configuration keywords here. More information on the DB2 CLI/ODBC configuration keywords can be found in The DB2 9.5 Information Center.
This concludes the data source setup. Now you need to copy the generated artifacts over to the DataPower box.

Copy artifacts to DataPower

In order to keep the setup simple, copy all artifacts into the local DataPower file system. DataPower also supports remote repositories like IBM's WebSphere Service Registry and Repository (WSRR).

Create a Directory

Since every service consists of multiple files, it's recommended to create a sub-directory for every service. Log into the DataPower WebGUI using your preferred domain. Select Files and Administration > File Management as shown below.
Figure 22. File Management icon
File Management icon
Click on the Actions... link next to the local: directory and select Create Subdirectory.
Figure 23. Create subdirectory
Create Subdirectory
Provide the name "DataPowerSampleService" as the subdirectory name and click Confirm Create to create the directory.
Figure 24. Provide new directory name
Provide new directory name
Finally click Continue to get back to the directory overview.

Copy files

Expand the local:/DataPowerSampleService directory and select the Action... link next to the DataPowerSampleService directory. Click the Upload Files action.
Figure 25. Upload files
Upload Files
Browse to the Web services' artifacts directory inside the Data Studio workspace and select each of the generated files. To add a new file to the upload list, click the Add button. After all files are selected, click Upload to transfer them to the DataPower box.
Figure 26. Upload generated files
Upload generated Files
Finally click Continue to get back to the directory overview.
Now that you understand how to copy the artifacts, you can see how to configure the service.

Configure and Test an XSL accelerator for HTTP POST XML binding

This step shows how to configure a simple XSL Accelerator to implement the HTTP POST (XML) binding for the DataPowerSampleService. This binding takes XML request messages following the request message schema as defined in the WSDL document. The message format is very similar to the SOAP message except that there are no wrapping SOAP body and SOAP envelope tags around the XML - just the plain message payload.

The XSL Accelerator set up

  1. Login to the DataPower WebGUI and click on the XSL Accelerator symbol:
    Figure 27. XSL Accelerator
    XSL Accelerator
  2. Click on the Add Wizard button
  3. Select XSL Proxy Service and click Next (Figure 28).
    Figure 28. XSL Proxy Service
    XSL Proxy Service
  4. Give the XSL Proxy the name DataPowerSampleService and click Next.
    Figure 29. Name the Proxy Service
    Name the Proxy Service
  5. Select loopback-proxy as proxy type and click Next.
    Figure 30. Loopback Proxy Service
    Loopback Proxy Service
  6. On the next screen, keep 0.0.0.0 as the device address. Also, keep the provided device port (this is the TCP/IP port the request will later be sent to - this port needs to be remembered). Click Next.
    Figure 31. Port Selection
    Port Selection
  7. Select the DataPowerSampleService_xml.xsl file from the store. Click Next.
    Figure 32. Select XSL file
    Select XSL file
  8. Verify your settings, and click Commit to create the XSL Proxy.
    Figure 33. Confirm XSL Proxy settings
    Confirm XSL Proxy settings
  9. Click Commit.
  10. Click on View Policy to modify the policy.
    Figure 34
    Figure 34
  11. If you want to re-use this XSL Accelerator for multiple bindings, you have to tune the URL matching rule. Double-Click on the URL matching rule symbol as Figure 35 shows.
    Figure 35. Policy Rule
    Policy Rule
  12. Create a new matching rule by clicking on the "+" button:
    Figure 36. Create New Match Rule
    Create New Match Rule
  13. Give the rule the name DataPowerSampleService_POST_XML. Click on the Matching Rule tab, select Add and enter the URL pattern /DataPowerSampleService/postXml/*.
    Figure 37. Define URL pattern
    Define URL pattern
    Click Done to create the new matching rule.
  14. Click the Apply Policy button in the policy window and close the policy window.
Now a client can send an HTTP POST request with an XML request message to DataPower to trigger the execution of a Web service operation. The HTTP POST binding requires an XML request message to be sent to the service. Data Studio Developer can be used to generate an appropriate XML request message for a Data Web service operation.

Generate an XML request message with Data Studio Developer

The Data Web services tooling allows you to generate an XML schema document for every operation. The XSD document contains the XML schema description for the operation's input and output message.
  1. Right-click on the Web service operation and select Manage XSLT... from the context menu.
    Figure 38. Select Manage XSLT...
    Select Manage XSLT...
  2. Click the Generate... button in the Configure XSL Transformations dialog.
    Figure 39. Generate the XML schema
    Generate the XML schema
  3. This brings up the Save As dialog. You may change the suggested file name and location, but it's recommended to keep it as is. Hit Save to save the XML schema to the project. And hit Finish in the Configure XSL Transformations dialog.
    Figure 40. Save the XML schema file
    Save the XML schema file
  4. Right-click on your DataPowerSamples project and select Refresh from the context menu.
    Figure 41. Refresh the project
    Refresh the project
  5. The generated XSD file should now appear inside the XML Schema folder of the project.
    Figure 42. Generated XML Schema file in project
    Generated XML Schema file in project
  6. With the generated XML schema, you can now generate an XML instance document representing the service request message. Right-click on the XML schema file and select Generate -> XML File ... from the context menu as Figure 43 shows.
    Figure 43. Generate XML instance document from schema
    Generate XML instance document from schema
  7. Provide a file name and location for the XML instance file to be generated. Here you can keep the defaults. Click Next.
    Figure 44. Specify name and location of XML request document
    Specify name and location of XML request document
  8. Select getEmployeeById as the XML Schema element you want to generate the XML instance document for. Data Web Services uses the operation name as the request message root name. Hit Finish.
    Figure 45. Select the XML root element to generate the XML instance document for
    Select the XML root element to generate the XML instance document for
  9. Provide a valid value for the empno tag and save the file.
    Figure 46. Final XML request message document
    Final XML request message document

Test the HTTP POST XML Binding using cURL

cURL

cURL is a free program that is widely used to send HTTP requests from the command line. We use this program to test our examples. To download cURL, go to the cURL Web site at the following address: http://curl.haxx.se
You can use cURL to test the service. cURL is a command-line tool and needs to be run from a terminal or DOS command window. The appropriate cURL command to invoke the service would look like this:
curl -X POST -H "Content-Type: text/xml" -v \
  -d @DataPowerSampleService.getEmployeeById.default.xml \
  http://DataPower:2058/DataPowerSampleService/postXml/getEmployeeById
A short explanation of the used cURL parameters:
  • -X defines the HTTP method -- in this instance, use POST
  • -H allows you to set or add HTTP header fields -- in this instance, add the Content-Type header to define text/xml as the request message type
  • -d specifies the data to be sent with the request -- here, define the XML request instance document you generated earlier with Data Studio Developer
  • -v turns on the verbose output
A short explanation of the request URL structure:
  • http:// defines the request protocol -- in this case, it's HTTP
  • DataPower defines the host name or TCP/IP address of the DataPower appliance -- that value needs to be adjusted according to your environment
  • 2058 -- the TCP/IP port you configured with the XSL Accelerator earlier
  • /DataPowerSampleService/postXml/ -- matches the URL pattern defined in the processing rule
  • getEmployeeById -- the XSL script uses this part of the URL to determine which operation to execute
Here is the sample request message:
Listing 1. Sample request message
POST /DataPowerSampleService/postXml/getEmployeeById HTTP/1.1
User-Agent: curl/7.18.0 (i386-pc-win32) libcurl/7.18.0 OpenSSL/0.9.8g zlib/1.2.3
Host: 9.30.9.160:2058
Accept: */*
Content-Type: text/xml
Content-Length: 314

<?xml version="1.0" encoding="UTF-8"?>
<tns:getEmployeeById xmlns:tns="http://ibm.com/example/DataPowerDWS">
   <empno>000130</empno>
</tns:getEmployeeById>
Here the sample response message:
Listing 2. Sample response message
HTTP/1.1 200 Good
User-Agent: curl/7.18.0 (i386-pc-win32) libcurl/7.18.0 OpenSSL/0.9.8g zlib/1.2.3
Host: 9.30.9.160:2058
Content-Type: text/xml
Via: 1.1 DataPowerSampleService
X-Client-IP: 9.30.250.100
Date: Thu, 01 Jan 1970 00:00:01 GMT
Transfer-Encoding: chunked

<?xml version="1.0" encoding="UTF-8"?>

<ns1:getEmployeeByIdResponse xmlns:ns1="http://ibm.com/example/DataPowerDWS">
   <EMPNO>000130</EMPNO>
   <FIRSTNME>DELORES</FIRSTNME>
   <MIDINIT>M</MIDINIT>
   <LASTNAME>QUINTANA</LASTNAME>
   <WORKDEPT>C01</WORKDEPT>
   <PHONENO>4578</PHONENO>
   <HIREDATE>2001-07-28Z</HIREDATE>
   <JOB>ANALYST </JOB>
   <EDLEVEL>16</EDLEVEL>
   <SEX>F</SEX>
   <BIRTHDATE>1955-09-15Z</BIRTHDATE>
   <SALARY>73800.00</SALARY>
   <BONUS>2527.11</BONUS>
   <COMM>1904.00</COMM>
</ns1:getEmployeeByIdResponse>

Configure and test an XSL Accelerator for HTTP POST/GET binding (URL-encoded parameter)

Now it's time to implement the bindings for non-XML requests. In this case, the parameters are passed in as URL-encoded parameter/value pairs.
You can re-use the XSL Accelerator that you setup previously. All you need to do is add a new processing rule to the policy.

The XSL Accelerator set up

  1. Login to the DataPower WebGUI and click on the XSL Accelerator symbol:
    Figure 47. XSL Accelerator
    XSL Accelerator
  2. Click on the DataPowerSampleService XSL Accelerator:
    Figure 48. Select existing XSL Proxy
    Select existing XSL Proxy
  3. Select the "..." button next to the Proxy Policy to open the policy:
    Figure 49. Modify the Proxy Policy
    Modify the Proxy Policy
  4. Select the New Rule button to create a new rule for the URL-encoded bindings.
    Figure 50. Create a new rule
    Create a new rule
  5. Double-click on the URL pattern matcher icon and create a new matching rule - the same way as described in the HTTP POST XML binding setup before - called DataPowerSampleService_URL_encoded using /DataPowerSampleService/urlEncoded/* as URL pattern and save the new matching rule.
    Figure 51. URL pattern match icon
    URL pattern match icon
  6. Drag and drop a Transform action onto the flow:
    Figure 52. Drag and drop a transform action onto the flow
    Drag and Drop a Transform action onto the flow
  7. Double-click it to open the configuration Window and select DataPowerSampleService_http_get_post.xsl as the processing control file:
    Figure 53. Select XLS processing file for HTTP GET/POST binding
    Select XLS processing file for HTTP GET/POST binding
  8. Click Done at the bottom of the page.
  9. Now you need to turn the URL-encoded parameter list into an XML document. To do that, drag and drop an Advanced action onto the flow between the URL pattern matcher and the transform action as Figure 54 shows.
    Figure 54. Add an Advanced action to the flow
    Add an Advanced action to the flow
  10. Open the advanced action by double-clicking on the symbol. Select Convert Query Params to XML as the action type and click Next
    Figure 55. Select "Convert Query Params to XML" as action
    Select
  11. Simply click Done in the next dialog. Your policy rule should now look like the one in Figure 56.
    Figure 56. Finished policy rule configuration
    Finished Policy Rule configuration
  12. Click Apply Policy and close the policy window. Also click Apply on the XSL accelerator configuration dialog to persist the changes. Select Review changes, then Save Config, and then Close.

Test the HTTP GET binding

Now the URL-encoded bindings are available. A client can either use an HTTP GET request and provide the input parameters in the URL query string, or an HTTP POST request can be used where the MIME-Type is application/x-www-form-urlencoded.
For example, to call the getEmployeeById operation via HTTP GET, the URL would look like this:
http://dataPower:2058/DataPowerSampleService/urlEncoded/getEmployeeById?empno=000130
You can now use the /DataPowerSampleService/urlEncoded/* URL pattern to activate the DataPowerSampleService_URL_encoded processing rule. Since HTTP GET has no notion of a request message, you attach the input parameters as URL-encoded key/value pairs in the URL query string.
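You can also drive the GET binding from the command line with cURL; the host name, port and empno value below simply reuse the values from the earlier examples:
curl -v "http://dataPower:2058/DataPowerSampleService/urlEncoded/getEmployeeById?empno=000130"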
Here is the sample response in a Web browser:
Figure 57. Response message for HTTP GET binding in a Web browser
Response message for HTTP GET binding in Web browser

Configuring and Testing a Web Service Proxy for SOAP over HTTP binding

Now you can configure the SOAP binding using a Web Service Proxy configuration. The SOAP binding setup requires an XML firewall loopback proxy on the appliance; only one such proxy is needed per appliance. This tutorial therefore describes the firewall loopback configuration before talking about the Web Service Proxy setup.

Step 1. Configure an XML firewall loopback proxy

The following steps describe the setup of an XML Loopback Firewall which will later be required when configuring the SOAP binding for the Data Web service.
  1. Login to the DataPower WebGUI and click on the XML Firewall symbol:
    Figure 58. XML Firewall icon
    XML Firewall icon
  2. Click the Add Wizard button and select Pass Thru as the Firewall type. Click Next.
  3. Give the new firewall a name - like MyLoopbackFirewall - and click Next as Figure 59 shows:
    Figure 59. Naming the XML Firewall
    Naming the XML Firewall
  4. Select loopback-proxy as the firewall type and click Next:
    Figure 60. Select loopback-proxy
    Select loopback-proxy
  5. Keep 0.0.0.0 as the device address, and choose an available TCP/IP port (the dialog usually provides the next available one automatically). Keep SSL disabled. Click Next:
    Figure 61. Keep default settings
    Keep default settings
  6. Confirm your settings and click Commit:
    Figure 62. Confirm settings
    Confirm settings
  7. The firewall is getting created and you will be prompted with a status page. Click Done to get to the XML firewall overview page.
  8. Click on the newly created firewall to see the "Configure XML Firewall" page.
  9. The default Request Type is SOAP. You may want to change it to Pass-Thru (unprocessed) so that the firewall can handle any HTTP message; otherwise an HTTP 500 is returned if the request does not comply with the configured request type. Click on the firewall to open the configuration dialog and set the Request Type to Pass-Thru.
    Figure 63. The final Loopback proxy configuration
    The final Loopback proxy configuration

Step 2. Configure the Web Service proxy

Now let's proceed with the SOAP over HTTP binding using the Web Service proxy.
  1. In the Control Panel overview, click on the Web Service Proxy symbol:
    Figure 64. The Web Service proxy icon
    The Web Service Proxy icon
  2. Select Add to create a new Web Service proxy.
  3. Give the Web Service Proxy the name DataPowerSampleService and click on the Create Web Service Proxy button.
    Figure 65. Name the Web Service proxy
    Naming the Web Service Proxy
  4. Use the DataPowerSampleService_dp.wsdl file for the WSDL File URL. Select off under Use WS-Policy References and click Next.
    Figure 66. Add the WSDL to the Web Service proxy
    Add the WSDL to the Web Service Proxy
  5. The next dialog requires multiple configuration steps. DataPower parsed the WSDL document and provides us with a list of endpoints defined in the WSDL. For this exercise we only care about the SOAP (over HTTP) endpoint.
    1. To activate the SOAP endpoint, you need to configure a local endpoint handler, a URI and the binding.
    2. To assign an endpoint handler, you can either select an existing service handler from the drop-down box or create a new one by clicking the + button. When creating a new service handler, choose the HTTP Front Side Handler option (see Figure 67).
      Figure 67. Create a new Front Side Handler
      Create a new Front Side Handler
    3. A new dialog window opens where you can specify the characteristics of the handler. Give the handler a name (like DataPowerSampleServiceFSH). At a minimum, you need to allow the HTTP POST method to support SOAP over HTTP. The TCP/IP port used in the handler configuration will be used for the endpoint URLs later on (in our case 4444).
      Figure 68. Configure the HTTP front side handler
      Configure the HTTP Front Side Handler
    4. The last settings to be made on the Local interface of the SOAP endpoint are the endpoint URI and the SOAP version. You can keep the defaults (URI: /<ServiceName>/soapEndpoint; SOAP version: 1.1). Click the + Add link to add your local endpoint handler configuration.
    5. Now you need to configure the Remote interface for the SOAP endpoint. Since DataPower is supposed to act as a Web service endpoint rather than forwarding the request to another server, you have to point to the local XML loopback firewall which you created before. The protocol is HTTP, the IP address is the loopback device (127.0.0.1), and the port is 2050 (the one configured for the loopback proxy). The endpoint URL doesn't matter; you can keep it as is.
      Figure 69. Final Web service port settings
      Final Web service port settings
    6. Click Next and Save Config in the upper right to persist the settings. This concludes the basic Web Service configuration. Now you need to take care of the implementation for the Web service.
  6. To configure the service implementation, click on the Policy tab. That gives you a view of the WSDL structure. It allows you to apply rules on different levels. You will add two rules on the HTTP SOAP port level.
    Figure 70. Add processing rule
    Add processing rule
  7. Expand the SOAP over HTTP port. You should add a rule for this port which implements the logic for all operations defined in that Web service. This rule contains an XSLT document (generated by Data Studio) which implements the access to DB2:
    1. Click on + Add Rule at the port level. This opens up a new dialog where you can specify the rule. Change the Rule Direction to Client to Server. Ensure that the match action (the little black square with the equal sign) matches all incoming URLs; by hovering over it, you should see URL: *:
      Figure 71. URL pattern match rule
      URL pattern match rule
    2. Drag and drop the Transform icon onto the flow line. This adds an XSLT transformer to the process which implements the database access. Double-click on the icon to modify the transformation implementation. Upload the DataPowerSampleService_soap.xsl file which implements the SOAP binding for the Web service and assign it to this transform action. Click Done.
      Figure 72. Transform action configuration
      Transform action configuration
    3. Finally, select the Apply button to conclude the Web service configuration. Figure 73 shows the policy rule definition for the Web service:
      Figure 73. Final policy setup
      Final policy setup
This concludes the Web Service configuration.

Test the SOAP over HTTP binding

A SOAP request can now be sent to the SOAP endpoint. The deployed WSDL document can be requested using the following URL:
http://datapower:4444/DataPowerSampleService/soapEndpoint?wsdl
The SOAP endpoint URL is:
http://datapower:4444/DataPowerSampleService/soapEndpoint
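Before switching to the Web Service Explorer, you can also exercise the SOAP binding with cURL, much like the plain-XML test earlier. The request file below simply wraps the earlier getEmployeeById payload in a SOAP 1.1 envelope; the empty SOAPAction header is an assumption and may need to match the soapAction declared in your generated WSDL:
Sample request file (soapRequest.xml):
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <tns:getEmployeeById xmlns:tns="http://ibm.com/example/DataPowerDWS">
      <empno>000130</empno>
    </tns:getEmployeeById>
  </soapenv:Body>
</soapenv:Envelope>
Invocation:
curl -X POST -H "Content-Type: text/xml" -H "SOAPAction: \"\"" \
  -d @soapRequest.xml \
  http://datapower:4444/DataPowerSampleService/soapEndpoint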
You can use the WSDL URL with the Web Service Explorer in Data Studio Developer by performing the following steps:
  1. Open the JavaEE perspective by selecting Window -> Open Perspective -> Other... from the top menu.
    Figure 74. Open the JavaEE Perspective
    Open the JavaEE Perspective
  2. Start the Web Service Explorer tool by selecting Run -> Launch the Web Service Explorer from the top menu.
    Figure 75. Launch the Web Service Explorer
    Launch the Web Service Explorer
  3. You will see a set of small icons in the upper right corner of the Web Service Explorer top menu bar. Click on the 2nd one from the right which says WSDL Page when you hover over it.
    Figure 76. Open the WSDL Main page in the Web Service Explorer
    Open the WSDL Main page in the Web Service Explorer
  4. Select the WSDL Main option in the left Navigator pane, enter the WSDL URL in the input field on the right pane, and click OK. This loads the WSDL file into the Web Service Explorer.
    Figure 77. Load WSDL file
    Load WSDL file
  5. If the WSDL was loaded successfully, a new entry will appear in the left hand Navigator pane under WSDL Main. Expand the entry and drill down to any of the Web service operations listed there. Select one of the operations and provide the appropriate input values in the right hand pane. Click OK to execute the Web service. You will see the Web service response displayed in the lower right pane in case the Web service runs successfully.
    Figure 78. Test the SOAP over HTTP binding with the Web Service Explorer
    Test the SOAP over HTTP binding with the Web Service Explorer

Conclusion

In this tutorial you learned how to use IBM Data Studio Developer to create a Data Web service and generate runtime artifacts for the WebSphere DataPower XI50 Integration Appliance. Furthermore, you learned how to configure a DB2 data source on the WebSphere DataPower XI50 Integration Appliance, upload the generated artifacts, and configure the different service bindings for the Data Web service. Finally, you used simple clients like a Web browser, cURL and the Web Services Explorer to test the different service bindings.

You can find more information on IBM WebSphere DataPower, IBM Data Studio Developer, and Data Web services under the Resources section of this tutorial in case you want to take a deeper dive into the products used here.