The IP name signifies the machine name, that is, the physical machine being referred to. A cluster is simply a group of such machines.
It is a unique identifier created while familiarizing the controls. During familiarization, a configuration is automatically created on the Jiffy server, with the format _Cluster_Name
The componentization functionality of Jiffy enables a high degree of reusability. You can create your own reusable components and share them across projects, reducing the time to automate and helping to standardize common procedures.
Jiffy lets you bundle frequently used steps (a group of Jiffy nodes/automation steps) that perform a subtask into a component that can be reused across tasks and processes. This avoids duplication of effort and data and reduces maintenance overhead. Any change to the subtask can be made directly in the bundled component, and it automatically propagates to all tasks that use that component.
Please refer to the section on Reusability and Maintenance < give link >
Ideally, we first need to identify and qualify the process for RPA before embarking on any automation activity. This involves defining parameters such as efficiency, accuracy, quality, cost, and the possible FTE benefit compared to the existing process.
Once the above has been ascertained, the next step is to understand what kinds of automation will be required (UI, DB, rule-based, cognitive) and then define a phased approach: a discovery phase, detailed scope definition, identification of exceptions, possible risks and mitigation strategies, and a detailed implementation and testing phase before deployment to production.
Please speak to our RPA experts for further insights into defining this RPA journey for you.
Make sure you log in with the same credentials that you used while configuring the RPASS bot.
First, check whether the Spark process is running. If it is not, log in to the Jiffy server and start Spark with the command below:
/opt/docube/spark-2.3.2-bin-hadoop2.7/sbin/start-all.sh
A running Spark installation shows a master and a worker process in the process list, for example:
[docubeapp-usr@docubedev sbin]$ ps -ef | grep spark
docubea+ 53539 1 10 08:23 pts/1 00:00:03 /opt/jdk1.8.0_131/jre/bin/java -cp /opt/docube/spark-2.3.2-bin-hadoop2.7/conf/:/opt/docube/spark-2.3.2-bin-hadoop2.7/jars/* -Xmx1g org.apache.spark.deploy.master.Master --host docubedev --port 7077 --webui-port 8080
docubea+ 53660 1 18 08:23 ? 00:00:03 /opt/jdk1.8.0_131/jre/bin/java -cp /opt/docube/spark-2.3.2-bin-hadoop2.7/conf/:/opt/docube/spark-2.3.2-bin-hadoop2.7/jars/* -Xmx1g org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://docubedev:7077
docubea+ 53744 53003 0 08:23 pts/1 00:00:00 grep --color=auto spark
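As a minimal sketch, the check and the restart can be combined into one guarded command. This assumes the same install path as above and that the master runs under the class name shown in the ps output:
# Start Spark only if the master process is not already running.
if ! pgrep -f org.apache.spark.deploy.master.Master > /dev/null; then
    /opt/docube/spark-2.3.2-bin-hadoop2.7/sbin/start-all.sh
fi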
To handle dynamic page numbers in task design, set the page count to the maximum number of pages you expect; even if there are fewer pages, the task will stop at the actual end page.
To resolve this issue, the user needs to restart the server with the proper configuration.
Follow these steps to stop the services (a command-line sketch follows the list):
1. Stop the JiffyWindowServices service.
2. Stop all conhost.exe and cmd.exe processes.
3. Stop the JPopeye.exe process.
4. Stop all java.exe processes.
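From the command line, the same stop sequence might look like the sketch below. The service name JiffyWindowServices is taken from the steps above; force-killing java.exe assumes no other Java applications are running on the bot machine:
net stop JiffyWindowServices
taskkill /F /IM conhost.exe
taskkill /F /IM cmd.exe
taskkill /F /IM JPopeye.exe
rem Assumption: java.exe belongs only to Jiffy on this machine.
taskkill /F /IM java.exe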
To start the services again (see the sketch after this list):
1. Start the JiffyWindowServices service.
2. Start JPopeye from the Start menu.
3. Start your bot and run your task.
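To restart from the command line instead, the service can be started with net start. The JPopeye path below is hypothetical, since the document only says to launch it from the Start menu:
net start JiffyWindowServices
rem Hypothetical install path; use the Start menu shortcut if unsure.
start "" "C:\Jiffy\JPopeye.exe"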