HFM audit data export utility availability in version 11
Hi Experts,
We have a client with an HFM environment where the audit and task logs grow very large very quickly.
They need to be able to archive and clear the logs. They are too large for EPM Maestro to handle and they don't want to schedule them as a regular event.
I am concerned because I am sure that these large log tables are impacting performance.
They want to know if the old System 9 utility they used is still available in the latest version. It was called the HFM audit data export utility. Does anyone know?
Thanks in advance and kind regards
Jean
I know this is a reasonably old post but I found it through Google. To help those in the future, this utility is available via Oracle Support. It is HFM Service Fix 11.1.1.2.05 but it is compatible up to 11.1.1.3.
Here is the Oracle Support KB Article:
How To Extract the Data Audit and Task Audit records of an HFM application to a File [ID 1067055.1]
Modified 23-MAR-2010 Type HOWTO Status PUBLISHED
Applies to:
Hyperion Financial Management - Version: 4.1.0.0.00 to 11.1.1.3.00 - Release: 4.1 to 11.1
Information in this document applies to any platform.
Goal
Some system administrators of Financial Management want a method to archive/extract the information from the DATA_AUDIT and TASK_AUDIT database tables of an HFM application before truncating those tables.
Solution
Oracle provides a standalone utility called HFMAuditExtractUtility.exe to accomplish this task. As well as extracting the records of the two log tables, the utility can also truncate the tables at the same time.
The utility comes with a Readme file which should be consulted for more detailed instructions on how it should be used.
The latest version of the utility which is compatible with all versions of HFM up to 11.1.1.3 is available as Service Fix 11.1.1.2.05 (Oracle Patch 8439656).
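For anyone without access to the patch, the manual equivalent of what the utility does can be sketched in plain SQL. This is only an illustration: DATA_AUDIT and TASK_AUDIT are the table names from the note above (in practice they usually carry the application name as a prefix), and the _ARCH archive table names are made up.

```sql
-- Sketch only: archive the audit rows, then truncate the live tables.
-- Run against the HFM application schema; take a backup first.
CREATE TABLE DATA_AUDIT_ARCH AS SELECT * FROM DATA_AUDIT;
CREATE TABLE TASK_AUDIT_ARCH AS SELECT * FROM TASK_AUDIT;
TRUNCATE TABLE DATA_AUDIT;
TRUNCATE TABLE TASK_AUDIT;
```

Unlike this sketch, the utility also writes the records out to a file, so treat a CTAS archive as a stopgap rather than a replacement.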
Edited by: Fredric J Parodi on Nov 5, 2010 9:43 AM
Similar Messages
-
Data-only unload/export utility available?
Hello,
I need to unload/export ONLY the data in all the tables in a specific schema in one of our databases, other than:
a) Writing a script to SELECT * from all table in the schema
b) Using 3rd party software
Are there any utilities available in native Oracle to achieve this? Export/expdp also outputs the table structures in the schema in addition to the data, which is not what I want.
Thanks in advance,
Shaun
http://www.oracle-base.com/articles/9i/GeneratingCSVFiles.php
You will have to query the data objects somehow, and if you don't want to use third-party tools, your options are pretty limited.
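The linked oracle-base article boils down to spooling a concatenated SELECT from SQL*Plus. A minimal sketch, assuming a table named emp with these columns (adjust names to your schema):

```sql
-- Spool one table to CSV from SQL*Plus (column list is illustrative)
SET HEADING OFF FEEDBACK OFF PAGESIZE 0 TRIMSPOOL ON LINESIZE 1000
SPOOL emp.csv
SELECT empno || ',' || ename || ',' || sal FROM emp;
SPOOL OFF
```

You would generate one such script per table (e.g. by querying USER_TABLES), which is still "writing a script", but it stays inside native Oracle tooling.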
If you have SQL Server, take a look at DTS, it's easy to use and reasonably fast, if you use the correct driver. -
Data export(ttbulkcp) Oracle TimesTen Question
I'm trying to export from Oracle TimesTen (TimesTen Release 11.2.1.5.0) with ttbulkcp (Data export), using SQL Developer Version 2.1.1.64.
All other functions operate normally, and ttbulkcp (Data export) works fine on an Oracle table. But I get the following error from ttbulkcp (Data export) on a TimesTen table:
java.lang.NullPointerException
at oracle.dbtools.raptor.format.ResultsFormatterWrapper.getColumnCount(ResultsFormatterWrapper.java:67)
at oracle.dbtools.raptor.format.ResultsFormatter.getColumnCount(ResultsFormatter.java:130)
at oracle.dbtools.raptor.timesten.format.TimesTenLoaderFormatter.getColumns(TimesTenLoaderFormatter.java:207)
at oracle.dbtools.raptor.timesten.format.TimesTenLoaderFormatter.printColumnData(TimesTenLoaderFormatter.java:183)
at oracle.dbtools.raptor.timesten.format.TimesTenLoaderFormatter.start(TimesTenLoaderFormatter.java:73)
at oracle.dbtools.raptor.format.ResultSetFormatterWrapper.print(ResultSetFormatterWrapper.java:150)
at oracle.dbtools.raptor.format.ResultsFormatter.print(ResultsFormatter.java:200)
at oracle.dbtools.raptor.format.ResultsFormatter.doPrint(ResultsFormatter.java:416)
at oracle.dbtools.raptor.dialogs.actions.TableExportAction$5.doWork(TableExportAction.java:637)
at oracle.dbtools.raptor.dialogs.actions.TableExportAction$5.doWork(TableExportAction.java:634)
at oracle.dbtools.raptor.backgroundTask.RaptorTask.call(RaptorTask.java:193)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at oracle.dbtools.raptor.backgroundTask.RaptorTaskManager$RaptorFutureTask.run(RaptorTaskManager.java:492)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Driver not capable
Thank you.
GooGyum
If you have a DB support contract, I suggest you open an SR on Metalink/MOS to get an official response and follow-up; otherwise you can hope someone from development picks it up here.
Regards,
K. -
Exporting whole database (10GB) using Data Pump export utility
Hi,
We have a requirement to export the whole database (10 GB) using the Data Pump export utility, because it is not possible to send a 10 GB dump on a CD/DVD to the system vendor of our application (to analyze a few issues we have).
When I checked online I saw that a full export is available, but I can't understand how it works, as we have never used the Data Pump utility; we use the normal export method. Also, will Data Pump reduce the size of the dump file so it can fit on a DVD, or can we use a parallel full DB export to split the files and burn them to DVDs? Is that possible?
Please correct me if i am wrong and kindly help.
Thanks for your help in advance.
You need to create a directory object.
sqlplus user/password
create directory foo as '/path_here';
grant all on directory foo to public;
exit;
then run your expdp command.
Data Pump can compress the dumpfile if you are on 11.1 and have the appropriate options. The reason for specifying FILESIZE is to limit the size of each dumpfile. If you have 10 GB, are not compressing, and the total dumpfiles come to 10 GB, then by specifying 600 MB you will get 10 GB / 600 MB = 17 dumpfiles of 600 MB each. You will have to send 17 CDs (probably a few more, since dumpfiles don't get filled 100% when running in parallel).
Data Pump dumpfiles are written by the server, not the client, so the dumpfiles don't get created in the directory where the job is run.
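To make Dean's advice concrete, here is a hypothetical expdp parameter file that splits a full export into DVD-sized pieces. The directory name foo comes from the example above; everything else is illustrative:

```text
# full_exp.par -- invoke as: expdp system/password parfile=full_exp.par
FULL=Y
DIRECTORY=foo
DUMPFILE=full_%U.dmp
FILESIZE=600M
# On 11.1 with the appropriate options you could add:
# COMPRESSION=ALL
```

The %U substitution makes Data Pump number the dumpfiles (full_01.dmp, full_02.dmp, ...) as each one reaches the FILESIZE limit.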
Dean -
HFM Taskflow Email and Data Export Not Working
Hello all,
I am attempting to set up the HFM taskflow email and data export functionality. Unfortunately, the email is not working. I completed the setup of my SMTP mail using the EPM Configurator and tested the email functionality for FR, where it works well. Not the case for taskflow.
Are there additional settings that need to be configured? I am also unable to extract data using the taskflow. The export path is specified in the textbox, but the file is not exported to the server location. Is there any type of port or setting that needs to be configured?
I am on version 11.1.2.1. Please help
Edited by: 944327 on 17/09/2012 14:52
I wanted to create a thread about this too. There is a big problem in Lightroom's email sending. Photos are sent inline but not as attachments. They need to be sent as attachments so that email clients can save them easily.
This is actually important to some and responses by Adobe have not understood this.
--2510201414571280874==00004
content-id: <AEDF354435956231.560091>
content-description: L1120161.jpg
content-transfer-encoding: BASE64
content-disposition: inline; filename="L1120161.jpg"
content-type: image/jpg; name="L1120161.jpg"
As for your low-res image: it is probably the setting you used; you might have it set to "email quality".
Ankur -
Java exception: Planning Data Form Import/Export Utility: FormDefUtil.sh
Hi,
We have the following in our environment
Oracle 10gAS (10.1.3.1.0)
Shared Services (9.3.1.0.11)
Essbase Server (9.3.1.3.01)
Essbase Admin Services (9.3.1.0.11)
Provider Services (9.3.1.3.00)
Planning (9.3.1.1.10)
Financial Reporting + Analysis UI Services (9.3.1.2)
I got the following error while using the Planning Data Form Import/Export Utility. Does anyone have any idea?
hypuser@server01>$PLANNING_HOME/bin/FormDefUtil.sh import TEST.xml localhost admin password SamApp
[May 6, 2009 6:25:11 PM]: Intializing System Caches...
[May 6, 2009 6:25:11 PM]: Loading Application Properties...
[May 6, 2009 6:25:11 PM]: Looking for applications for INSTANCE: []
[May 6, 2009 6:25:13 PM]: The polling interval is set =10000
Arbor path retrieved: /home/hypuser/Hyperion/common/EssbaseRTC/9.3.1
[May 6, 2009 6:25:14 PM]: Setting ARBORPATH=/home/hypuser/Hyperion/common/EssbaseRTC/9.3.1
Old PATH: /home/hypuser/Hyperion/common/JRE/IBM/1.5.0/bin:/home/hypuser/Hyperion/common/JRE/IBM/1.5.0/bin:/home/hypuser/Hyperion/common/JRE/IBM/1.5.0/bin:/home/hypuser/Hyperion/common/JRE/IBM/1.5.0/bin:/home/hypuser/Hyperion/AnalyticServices/bin:/home/hypuser/Hyperion/common/JRE-64/IBM/1.5.0/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/home/hypuser/bin:/usr/bin/X11:/sbin:.
[May 6, 2009 6:25:14 PM]: Old PATH: /home/hypuser/Hyperion/common/JRE/IBM/1.5.0/bin:/home/hypuser/Hyperion/common/JRE/IBM/1.5.0/bin:/home/hypuser/Hyperion/common/JRE/IBM/1.5.0/bin:/home/hypuser/Hyperion/common/JRE/IBM/1.5.0/bin:/home/hypuser/Hyperion/AnalyticServices/bin:/home/hypuser/Hyperion/common/JRE-64/IBM/1.5.0/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/home/hypuser/bin:/usr/bin/X11:/sbin:.
java.lang.UnsupportedOperationException
at com.hyperion.planning.olap.HspEssbaseEnv.addEssRTCtoPath(Native Method)
at com.hyperion.planning.olap.HspEssbaseEnv.init(Unknown Source)
at com.hyperion.planning.olap.HspEssbaseJniOlap.<clinit>(Unknown Source)
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:187)
at com.hyperion.planning.HspJSImpl.createOLAP(Unknown Source)
at com.hyperion.planning.HspJSImpl.<init>(Unknown Source)
at com.hyperion.planning.HspJSHomeImpl.createHspJS(Unknown Source)
at com.hyperion.planning.HspJSHomeImpl.getHspJSByApp(Unknown Source)
at com.hyperion.planning.HyperionPlanningBean.Login(Unknown Source)
at com.hyperion.planning.HyperionPlanningBean.Login(Unknown Source)
at com.hyperion.planning.utils.HspFormDefUtil.main(Unknown Source)
Setting Arbor path to: /home/hypuser/Hyperion/common/EssbaseRTC/9.3.1
[May 6, 2009 6:25:15 PM]: MAX_DETAIL_CACHE_SIZE = 20 MB.
[May 6, 2009 6:25:15 PM]: bytesPerSubCache = 5654 bytes
[May 6, 2009 6:25:15 PM]: MAX_NUM_DETAIL_CACHES = 3537
Setting HBR Mode to: 2
Unable to find 'HBRServer.properties' in the classpath
HBR Configuration has not been initialized. Make sure you have logged in sucessfully and there are no exceptions in the HBR log file.
java.lang.Exception: HBR Configuration has not been initialized. Make sure you have logged in sucessfully and there are no exceptions in the HBR log file.
HBRServer.properties:HBR.embedded_timeout=10
HBR Configuration has not been initialized. Make sure you have logged in sucessfully and there are no exceptions in the HBR log file.
java.lang.ExceptionInInitializerError
at java.lang.J9VMInternals.initialize(J9VMInternals.java:205)
at com.hyperion.hbr.api.thin.HBR.<init>(Unknown Source)
at com.hyperion.hbr.api.thin.HBR.<init>(Unknown Source)
at com.hyperion.planning.db.HspFMDBImpl.initHBR(Unknown Source)
at com.hyperion.planning.db.HspFMDBImpl.initializeDB(Unknown Source)
at com.hyperion.planning.HspJSImpl.createDBs(Unknown Source)
at com.hyperion.planning.HspJSImpl.<init>(Unknown Source)
at com.hyperion.planning.HspJSHomeImpl.createHspJS(Unknown Source)
at com.hyperion.planning.HspJSHomeImpl.getHspJSByApp(Unknown Source)
at com.hyperion.planning.HyperionPlanningBean.Login(Unknown Source)
at com.hyperion.planning.HyperionPlanningBean.Login(Unknown Source)
at com.hyperion.planning.utils.HspFormDefUtil.main(Unknown Source)
Caused by: Exception HBR Configuration has not been initialized. Make sure you have logged in sucessfully and there are no exceptions in the HBR log file.
ClassName: java.lang.Exception
at com.hyperion.hbr.common.ConfigurationManager.getServerConfigProps(Unknown Source)
at com.hyperion.hbr.cache.CacheManager.<clinit>(Unknown Source)
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:187)
... 11 more
[May 6, 2009 6:25:15 PM]: Regeneration of Member Fields Complete
[May 6, 2009 6:25:16 PM]: Thread main acquired connection com.hyperion.planning.olap.HspEssConnection@5f0e5f0e
[May 6, 2009 6:25:16 PM]: Thread main releasing connection com.hyperion.planning.olap.HspEssConnection@5f0e5f0e
[May 6, 2009 6:25:16 PM]: Thread main released connection com.hyperion.planning.olap.HspEssConnection@5f0e5f0e
[May 6, 2009 6:25:16 PM]: Need to create an Object. pool size = 0 creatredObjs = 1
java.lang.RuntimeException: Unable to aquire activity lease on activity 1 as the activity is currently leased by another server.
at com.hyperion.planning.sql.actions.HspAquireActivityLeaseCustomAction.custom(Unknown Source)
at com.hyperion.planning.sql.actions.HspAction.custom(Unknown Source)
at com.hyperion.planning.sql.actions.HspActionSet.doActions(Unknown Source)
at com.hyperion.planning.sql.actions.HspActionSet.doActions(Unknown Source)
at com.hyperion.planning.HspJSImpl.aquireActivityLease(Unknown Source)
at com.hyperion.planning.HspJSImpl.reaquireActivityLease(Unknown Source)
at com.hyperion.planning.utils.HspTaskListAlertNotifier.reaquireTaskListActivityLease(Unknown Source)
at com.hyperion.planning.utils.HspTaskListAlertNotifier.processTaskListAlerts(Unknown Source)
at com.hyperion.planning.utils.HspTaskListAlertNotifier.run(Unknown Source)
[May 6, 2009 6:25:16 PM]: Fetching roles list for user took time: Total: 42
[May 6, 2009 6:25:16 PM]: Entering method saveUserIntoPlanning
[May 6, 2009 6:25:16 PM]: User role is:0
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HP:0005,ou=HP,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:1,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:9,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:3,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:7,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:14,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:15,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:10,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:12,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:13,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:1,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:9,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:3,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:7,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:14,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:15,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:10,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:12,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Skipping unused HUB role: native://DN=cn=HUB:13,ou=HUB,ou=Roles,dc=css,dc=hyperion,dc=com?ROLE
[May 6, 2009 6:25:16 PM]: Hub Roles for user is:991
[May 6, 2009 6:25:16 PM]: Exiting method saveUserIntoPlanning
[May 6, 2009 6:25:16 PM]: Saved the user admin to Planning
[May 6, 2009 6:25:16 PM]: Entering method persistUserChanges()
[May 6, 2009 6:25:16 PM]: Exiting method persistUserChanges()
[May 6, 2009 6:25:16 PM]: Before calling getGroupsList for user from CSS
[May 6, 2009 6:25:16 PM]: After getGroupsList call returned from CAS with groupsList [Ljava.lang.String;@705c705c
[May 6, 2009 6:25:16 PM]: Fetching groups list for user took time: Total: 4
[May 6, 2009 6:25:16 PM]: Entering method persistGroupChanges()
[May 6, 2009 6:25:16 PM]: Exiting method persistGroupChanges()
[May 6, 2009 6:25:16 PM]: User synchronization of 1 user elapsed time: 81, Users: 72, Groups: 9.
[May 6, 2009 6:25:16 PM]: Didnt add child Forms, couldnt find parent 1
Add/Update form under form folder - Corporate
[May 6, 2009 6:25:21 PM]: Propegating external event[ FROM_ID: 7a7ebc1d Class: class com.hyperion.planning.sql.HspObject Object Type: -1 Primary Key: 1454699 ]
[May 6, 2009 6:25:21 PM]: Propegating external event[ FROM_ID: 7a7ebc1d Class: class com.hyperion.planning.sql.HspPDFPrintOptions Object Type: -1 Primary Key: 1454699 ]
[May 6, 2009 6:25:21 PM]: Propegating external event[ FROM_ID: 7a7ebc1d Class: class com.hyperion.planning.sql.HspForm Object Type: 7 Primary Key: 1454699 ]
[May 6, 2009 6:25:21 PM]: Propegating external event[ FROM_ID: 7a7ebc1d Class: class com.hyperion.planning.sql.HspPDFPrintOptions Object Type: -1 Primary Key: 1454699 ]
[May 6, 2009 6:25:21 PM]: Propegating external event[ FROM_ID: 7a7ebc1d Class: class com.hyperion.planning.sql.HspAnnotation Object Type: 14 Primary Key: 1454699 ]
[May 6, 2009 6:25:21 PM]: Propegating external event[ FROM_ID: 7a7ebc1d Class: class com.hyperion.planning.sql.HspAccessControl Object Type: 15 Primary Key: 50001,1454699 ]
[May 6, 2009 6:25:21 PM]: Propegating external event[ FROM_ID: 7a7ebc1d Class: class com.hyperion.planning.sql.HspFormDef Object Type: -1 Primary Key: 1454699 ]
Form imported complete.
[May 6, 2009 6:25:21 PM]: Could not get HBR connection.
Hi,
When I run the Formdefutil command, forms were imported successfully. But I got the following message.
Could not get HBR connection.
What does it mean?
Thanks & Regards,
Sravan Kumar. -
EXP utility has higher version than database - export failed!
Hi,
I have an Oracle8 Enterprise Edition Release 8.0.5.0.0 database on our server, and I have installed a 9i client on my PC. Now I have to use the export utility to export a specific table from our DB, but as I read in the 'Oracle9i Database Utilities' book:
"The version of the Export utility must be equal to the lowest version of the source or target database".
So I guess I need to install an 8.0.5 client on my PC (I have W2K), but nobody can give me the install CD in the company :(
I've looked on the OTN page to see if I can find the needed version, but I found only the 8i (8.1.7) install kit. Do you think the EXP utility will work if I install that? Or can I find the 8.0.5 install kit somewhere else?
Thanks:
Szilard
PS: the error message I got when I tried to export from the 8.0.5 DB with the 9i client:
C:\exp user/passw file=myexp.dmp tables=emp;
EXP-00056: ORACLE error 2248 encountered
ORA-02248: invalid option for ALTER SESSION
EXP-00000: Export terminated unsuccessfully
, but the same command line against a 9i DB worked...
Since your target server is 8.0.5, you should use the EXP utility from an 8.0.x version. Anything higher will not work.
HTH. -
Oracle data pump vs import/export utility
Hello all,
What is the difference between Oracle Data Pump and the Import/Export utility? Which one is faster?
Handle: user9362044
Status Level: Newbie
Registered: Jan 26, 2011
Total Posts: 31
Total Questions: 11 (7 unresolved)
so many questions & so few answers.
What is the difference between Oracle Data Pump and Import/Export utility?
Unwilling or incapable of reading The Fine Manual yourself?
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/toc.htm -
Can I use a higher version export utility to back up a database of a lower version?
The export utility is at 10.1.0.2.0 and the database version is 9.2.0.4.0.
cheers
kumaresh
Can I use a higher version export utility to back up a database of a lower version?
If you have access to Metalink you should read Note:132904.1. Headline: "The export fails if you use a higher release export version."
Also: I hope this is not your only "backup". Export really isn't a sensible backup solution.
Cheers, APC -
Data Export error when migrating cucm cluster on version 7.1.5 to cucm 10.0
Hi
Has anyone come across below? If so any suggestions for workaround?
Oct 01, 2014 11:54 PDT
STATUS
The task has been scheduled.
Oct 01, 2014 11:54 PDT
INFO
Export task action ID #154 with 1 node(s) scheduled.
Oct 01, 2014 11:54 PDT
STATUS
The task has started.
Oct 01, 2014 11:54 PDT
INFO
Export task action ID #154 with 1 node(s) started.
Oct 01, 2014 11:54 PDT
INFO
Export job for node xx.xx.xx started.
Oct 01, 2014 12:09 PDT
ERROR
Data export failed for node xx.xx.xx.
Oct 01, 2014 12:09 PDT
ERROR
Export job for node xx.xx.xx failed.
Oct 01, 2014 12:09 PDT
ERROR
1 nodes(s) in Export task action ID #154 failed: xx.xx.xx
Oct 01, 2014 12:09 PDT
ERROR
Task paused due to task action failures.
Hi,
you can log in to PCD through PuTTY to see the logs:
file list activelog tomcat/logs/ucmap/log4j/ detail date
then run
file tail activelog tomcat/logs/ucmap/log4j/ucmap00001.log [for example]
regds,
aman -
Page Exporter Utility (PEU) 5 Script
I love this script; it really has the potential to shave valuable time off our production (exporting hundreds of pages manually to single files is tedious and time-consuming). The problem is I'm hesitant to use it because it doesn't show warnings when there is a missing link or font. There's also no warning when it overwrites a file. My question is: does anyone know how to unsuppress the warnings? I've looked over the script and don't see anything that is actively suppressing them.
Thanks
Dan
// PageExporterUtility5.0.js
// An InDesign CS JavaScript
// 08 NOV 2007
// Copyright (C) 2007 Scott Zanelli. Lonelytree Software. (www.lonelytreesw.com)
// Coming to you from Quincy, MA, USA
// This program is free software; you can redistribute it and/or
// modify it under the terms of the GNU General Public License
// as published by the Free Software Foundation; either version 2
// of the License, or (at your option) any later version.
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
var peuINFO = new Array();
peuINFO["csVersion"] = parseInt(app.version);
// Save the old interaction level
if(peuINFO.csVersion == 3) { //CS1
peuINFO["oldInteractionPref"] = app.userInteractionLevel;
app.userInteractionLevel = UserInteractionLevels.interactWithAll;
}
else { //CS2+
peuINFO["oldInteractionPref"] = app.scriptPreferences.userInteractionLevel;
app.scriptPreferences.userInteractionLevel = UserInteractionLevels.interactWithAll;
}
// See if a document is open. If not, exit
if((peuINFO["numDocsToExport"] = app.documents.length) == 0){
byeBye("Please open a document and try again.",true);
}
// Global Variable Initializations
var VERSION_NAME = "Page Exporter Utility 5.0";
var pseudoSingleton = 0;
var progCurrentPage = 0;// Used for progress bar
var progTotalPages = 0; // Used for progress bar
var commonBatchINFO = getNewTempENTRY(); // Used if "Set For All Batch Jobs" is selected
commonBatchINFO["infoLoaded"] = false;
peuInit(); // Initialize commonly needed info
// Store information needed for batch processing (valid for single also)
var printINFO = new Array();
// Get all needed info by looping for each document being exported
for(currentDoc = 0; currentDoc < peuINFO.numDocsToExport; currentDoc++) {
// Get name of document and set it as the base name
// minus the extention if it exists
var tempENTRY = getNewTempENTRY(); // "Struct" for current document info
tempENTRY.theDoc = app.documents[currentDoc]
tempENTRY.singlePage = (tempENTRY.theDoc.documentPreferences.pagesPerDocument==1)?true:false;
tempENTRY.getOut = true;
var baseName = (tempENTRY.theDoc.name.split(".ind"))[0];
// Display the dialog box and loop for correct info
if((!commonBatchINFO.infoLoaded && peuINFO.batchSameForAll) || (!peuINFO.batchSameForAll) ){// get all info
do{
tempENTRY.getOut = true; // For handling input errors
var mainDialog = createMainDialog(tempENTRY.theDoc.name, currentDoc+1, peuINFO.numDocsToExport);
// Exit if cancel button chosen
if(!mainDialog.show() ){
mainDialog.destroy();
byeBye("Exporting has canceled by user.",peuINFO.sayCancel);
if(formatTypeRB.selectedButton == 4 - peuINFO.adjustForNoPS){
changePrefs();
tempENTRY.getOut = false;
continue;
// Read info from dialog items and keep as defaults and place in tempENTRY
peuINFO.defaultDir = outputDirs.selectedIndex;
// The index of the selected directory is the same as the
// index in the array of directories.
tempENTRY.outDir = peuINFO.dirListArray[peuINFO.defaultDir];
// Type of renaming to do
tempENTRY.nameConvType = namingConvention.selectedButton
// The base name to add page info to
tempENTRY.baseName = removeColons(removeSpaces(newBaseName.editContents) );
// The start [and end] page numbers
var startEndPgs = startEndPgs.editContents;
// Wether to do spreads or not
peuINFO.doSpreadsON = (doSpreads.checkedState)?1:0;
tempENTRY.doSpreadsON = peuINFO.doSpreadsON;
// Wether to send entire file as one document
peuINFO.doOneFile = (oneFile.checkedState)?1:0;
tempENTRY.doOneFile = peuINFO.doOneFile;
// Export format type
tempENTRY.formatType = formatTypeRB.selectedButton + peuINFO.adjustForNoPS;
// Set persistence during warnings
peuINFO.editStartEndPgs = startEndPgs;
baseName = tempENTRY.baseName;
peuINFO.exportDefaultType = tempENTRY.formatType;
// Determine if page replacement token exists when the page token option is used
if(peuINFO.pageNamePlacement == 2){
var temp = tempENTRY.baseName.indexOf("<#>");
if(temp == -1){//Token isn't there
alert("There is no page item token (<#>) in the base name. Please add one or change the page naming placement preference.");
tempENTRY.getOut = false;
pseudoSingleton--; // Allow prefs to be accessed again, but just once
continue;
else
pseudoSingleton++;
else // Try to remove any <#>s as a precaution
tempENTRY.baseName = tempENTRY.baseName.replace(/<#>/g,"");
// Layer Versioning & Batch options
if(currentDoc < 1){
peuINFO.layersON = (doLayers.checkedState)?1:0;
if(peuINFO.numDocsToExport > 1){
peuINFO.batchSameForAll = (commonBatch.checkedState)?1:0;
peuINFO.batchON = (doBatch.checkedState)?1:0;
if(!peuINFO.batchON)
peuINFO.numDocsToExport = 1;
//Check if spreads chosen with 'Add ".L"' option as this isn't supported.
if(peuINFO.doSpreadsON && tempENTRY.nameConvType == 1){
alert ("Spreads cannot be used with the 'Add \".L\"' option.\nThis combination is not supported. (1.1)");
tempENTRY.nameConvType = 0;
tempENTRY.getOut = false;
continue;
else if(peuINFO.doSpreadsON && tempENTRY.nameConvType == 4){
alert ("Spreads cannot be used with the 'Numeric Override' option.\nThis combination is not supported. (1.2)");
tempENTRY.nameConvType = 0;
tempENTRY.getOut = false;
continue;
// Check if "Send Entire File At Once" is selected with JPG or EPS
if(peuINFO.doOneFile && tempENTRY.formatType > 1 ){
alert ("The 'Send Entire File At Once' option can only be used with PostScript or PDF formats. (1.3)");
tempENTRY.getOut = false;
continue;
// Check if: batch printing and using the "Same for all jobs options" and a page range other than "All" was used
if(peuINFO.doBatch && peuINFO.batchSameForAll && startEndPgs != "All"){
alert ("The 'Set For All Batch Jobs' option can only be used with a Page Range of 'All'. Page Range has been reset to 'All'. (1.4)");
startEndPgs = "All";
// Create page info, skip if doing entire file as one
var tempPageCount = 0;
if(tempENTRY.doOneFile)
tempPageCount = 1;
else{
// Get names of all the pages. Needed when pages are named using sectioning
tempENTRY = recordPgNames(tempENTRY);
// Check Page Validity and get Page counts of entered section(s)
var temp = checkPages(tempENTRY, startEndPgs);
tempENTRY = temp[0];
tempPageCount = temp[1];
temp = null; // Free it up
} while(!tempENTRY.getOut);
// Remove dialog from memory
mainDialog.destroy();
// Determine if tag will fit correctly
tempENTRY.useTag = usePgInfoTag(tempENTRY.theDoc.viewPreferences.horizontalMeasurementUnits,tempENTRY.theDoc.documentPreferences.pageWidth);
// Get the format info for this document
switch(tempENTRY.formatType){
case 0://PS
tempENTRY.psINFO = getPSoptions(tempENTRY.theDoc.name.split(".ind")[0]);
break;
case 1://PDF
tempENTRY.pdfPRESET = getPDFoptions(tempENTRY);
break;
case 2: // EPS Formatting
tempENTRY.epsINFO = getEPSoptions(tempENTRY.theDoc.name.split(".ind")[0]);
peuINFO.origSpread = app.epsExportPreferences.epsSpreads;// Used to reset to original state when done
app.epsExportPreferences.epsSpreads = peuINFO.doSpreadsON;
break;
case 3: // JPEG Formatting
tempENTRY.jpegINFO = getJPEGoptions(tempENTRY.theDoc.name.split(".ind")[0]);
break;
// If Specific Directory was chosen for the output directory, get it now
if(peuINFO.defaultDir == 0){
tempENTRY.outDir = getDirectory("Please select the output directory:",peuINFO.startingDirectory);
if(tempENTRY.outDir != null)
tempENTRY.outDir += "/";
else
byeBye("Exporting has been canceled by user.", peuINFO.sayCancel);
// Set the common elements for all batch jobs if it was selected
if(!commonBatchINFO.infoLoaded && peuINFO.batchSameForAll){
commonBatchINFO.infoLoaded = true;
commonBatchINFO.pageNamePlacement = tempENTRY.pageNamePlacement;
commonBatchINFO.outDir = tempENTRY.outDir;
commonBatchINFO.nameConvType = tempENTRY.nameConvType
commonBatchINFO.doSpreadsON = tempENTRY.doSpreadsON;
commonBatchINFO.doOneFile = tempENTRY.doOneFile;
commonBatchINFO.formatType = tempENTRY.formatType;
commonBatchINFO.psINFO = tempENTRY.psINFO;
commonBatchINFO.pdfPRESET = tempENTRY.pdfPRESET
commonBatchINFO.epsINFO = tempENTRY.epsINFO
commonBatchINFO.jpegINFO = tempENTRY.jpegINFO;
} // End each/first of batch
else{ // Get the base name for other batch jobs
do{
tempENTRY.getOut = true;
var nameDialog = app.dialogs.add({name:(VERSION_NAME + ": Base Name for \"" + tempENTRY.theDoc.name + "\"" + ((peuINFO.numDocsToExport==1)?"":" (" + (currentDoc+1) + " of " + peuINFO.numDocsToExport + " documents)") ), canCancel:true} );
with (nameDialog){
with (dialogColumns.add() ){
with(dialogRows.add() ){
staticTexts.add({staticLabel:"Enter the Base Name for \"" + tempENTRY.theDoc.name + "\""} );
var newBaseName = textEditboxes.add({editContents:baseName, minWidth:135} );
with(dialogRows.add() )
staticTexts.add({staticLabel:"", minWidth:400} );
if(!nameDialog.show() ){
nameDialog.destroy();
byeBye("User canceled export.",peuINFO.sayCancel);
else{
tempENTRY.baseName = removeColons(removeSpaces(newBaseName.editContents) );
nameDialog.destroy();
// Determine if page replacement token exists when the page token option is used
if(peuINFO.pageNamePlacement == 2){
var temp = tempENTRY.baseName.indexOf("<#>");
if(temp == -1){//Token isn't there
alert("There is no page item token (<#>) in the base name. Please add one or click cancel in the next dialog box.");
tempENTRY.getOut = false;
else // Try to remove any <#>s as a precaution
tempENTRY.baseName = tempENTRY.baseName.replace(/<#>/g,"");
}while(!tempENTRY.getOut);
// Get names of all the pages. Needed when pages are named using sectioning
tempENTRY = recordPgNames(tempENTRY);
// Set pgStart and pgEnd, forcing "All" pages to output
tempENTRY = (checkPages(tempENTRY, "All"))[0];
// The page count is all pages due to common batching
tempPageCount = tempENTRY.theDoc.pages.length;
// This info is common, get it from commonBatchINFO:
tempENTRY.pageNamePlacement = commonBatchINFO.pageNamePlacement;
tempENTRY.outDir = commonBatchINFO.outDir;
tempENTRY.nameConvType = commonBatchINFO.nameConvType;
tempENTRY.doSpreadsON = commonBatchINFO.doSpreadsON;
tempENTRY.doOneFile = commonBatchINFO.doOneFile;
tempENTRY.formatType = commonBatchINFO.formatType;
tempENTRY.psINFO = commonBatchINFO.psINFO;
tempENTRY.pdfPRESET = commonBatchINFO.pdfPRESET;
tempENTRY.epsINFO = commonBatchINFO.epsINFO;
tempENTRY.jpegINFO = commonBatchINFO.jpegINFO;
// Get any layering info
if(peuINFO.layersON){
tempENTRY.layerINFO = layerManager(tempENTRY.theDoc);
if (tempENTRY.layerINFO == null) // Only one layer, turn it off for this doc
tempENTRY.layersON = false;
else
tempENTRY.layersON = true;
// Sum up pages for the grand total for use in progress bar
var temp = 1;
if(peuINFO.doProgressBar && tempENTRY.layersON){
// Figure tally for progress bar to include versions
for(i=0;i < tempENTRY.layerINFO.verControls.length; i++)
if (tempENTRY.layerINFO.verControls[i] == 1)
temp++;
if(!peuINFO.baseLaersAsVersion)
temp--;
progTotalPages += (tempPageCount*temp);
// All info for this doc is finally gathered, add it to the main printINFO array
printINFO.push(tempENTRY);
// Only one chance to change prefs: trigger singleton
pseudoSingleton++;
}// end of main for loop
savePrefs(); // Record any changes
// Initialize progress bar if available
if(peuINFO.doProgressBar)
app.createProgressBar("Exporting Pages...", 0, progTotalPages, true);
// Export by looping through all open documents if using batch option, otherwise just the front document is exported
for(currentDoc = 0; currentDoc < printINFO.length; currentDoc++){
var currentINFO = printINFO[currentDoc];
// Set message in progress bar if available
if(peuINFO.doProgressBar){
var progCancel = app.setProgress(currentINFO.theDoc.name);
if(progCancel)
byeBye("User canceled export.",peuINFO.sayCancel);
// Set format options here so it's done just once per document
setExportOption(currentINFO);
// "Do one file" or PS/PDF with one page:
if (currentINFO.doOneFile || currentINFO.singlePage){
// Remove page token if it was entered and this name positioning option is set
currentINFO.baseName = currentINFO.baseName.replace(/<#>/g,"");
if(currentINFO.layersON){
var theLayers = currentINFO.theDoc.layers;
var baseControls = currentINFO.layerINFO.baseControls;
var versionControls = currentINFO.layerINFO.verControls;
var lastVersion = -1;
// Loop for versioning
for(v = 0; v < versionControls.length; v++){
if(!versionControls[v])
continue;
if(lastVersion != -1)// Turn the last layer back off
theLayers[lastVersion].visible = false;
lastVersion = v;
theLayers[v].visible = true;
currentINFO.outfileName = addPartToName(currentINFO.baseName, theLayers[v].name, peuINFO.layerBeforeON);
// Export this version
exportPage(currentINFO, PageRange.allPages);
// Advance progress bar if available
if(peuINFO.doProgressBar)
advanceBar();
// If Base layer/s is/are to be output as a version, do it now
if(peuINFO.baseLaersAsVersion){
lastVersion = -1;
// Turn off all versioning layers
for(v = 0; v < currentINFO.layerINFO.baseControls.length; v++){
if(currentINFO.layerINFO.baseControls[v])// its a base layer, keep track of last base version layer number
lastVersion = v;
else
theLayers[v].visible = false;
if (lastVersion != -1){// Only export if there was a base version
currentINFO.outfileName = addPartToName(currentINFO.baseName, theLayers[lastVersion].name, peuINFO.layerBeforeON);
// Export the base layer(s)
exportPage(currentINFO, PageRange.allPages);
// Advance progress bar if available
if(peuINFO.doProgressBar)
advanceBar();
else{ // No layer versioning, just export
currentINFO.outfileName = currentINFO.baseName;
// Export the base layer(s)
exportPage(currentINFO, PageRange.allPages);
// Advance progress bar if available
if(peuINFO.doProgressBar)
advanceBar();
if(!peuINFO.batchON)
byeBye("Done exporting as a single file.",true);
else{ // Do single pages/spreads
if (!currentINFO.hasNonContig)
// Pages are contiguous, can just export
outputPages(currentINFO.pgStart, currentINFO.pgEnd, currentINFO);
else{ // Export non-contiguous
// Loop through array of page sections
for (ii = 0; ii < currentINFO.nonContigPgs.length; ii++){
temp = currentINFO.nonContigPgs[ii];
// Here we handle the start/end pages for any non-contig that has "-"
if (temp.indexOf("-") != -1){
temp2 = temp.split("-");
outputPages(temp2[0],temp2[1], currentINFO);
}
else // The non-contiguous page is a single page
outputPages(temp, temp, currentINFO);
// Set the spread settings back to what it was originally
try{
switch (currentINFO.formatType){
case 0: // PostScript Formatting
theDoc.printPreferences.printSpreads = peuINFO.origSpread;
break;
case 1: // PDF Formatting
currentINFO.pdfPRESET.exportReaderSpreads = peuINFO.origSpread;
break;
case 2: // EPS Formatting
app.epsExportPreferences.epsSpreads = peuINFO.origSpread;
break;
case 3: // JPEG Formatting
app.jpegExportPreferences.exportingSpread = peuINFO.origSpread;
break;
}
}catch(e){/*Just ignore it*/}
byeBye("The requested pages are done being exported.",true); // Last line of script execution
/* Operational Functions */
// Handle exporting
function outputPages(pgStart, pgEnd, currentINFO){
var pgRange;
var layerName = "";
var numVersions;
var currentPage;
var lastVersion = -1;
var numericallyLastPage;
if (currentINFO.layersON){
var theLayers = currentINFO.theDoc.layers;
var baseControls = currentINFO.layerINFO.baseControls;
var versionControls = currentINFO.layerINFO.verControls;
numVersions = versionControls.length;
// Compensate for base layers as a version
if(peuINFO.baseLaersAsVersion)
numVersions++;
}
else
numVersions = 1;
for (v = 0; v < numVersions; v++){
if(currentINFO.layersON){
if(v == (numVersions - 1) && peuINFO.baseLaersAsVersion){
var currentLayer = -1;
// Base layer(s) are to be output as a version
// Turn off all versioning layers
for(slbm = 0; slbm < baseControls.length; slbm++){
if(baseControls[slbm])// it's a base layer, use its name for page name
currentLayer = slbm;
else
theLayers[slbm].visible = false;
}
// Check if there was no base layer at all
if (currentLayer == -1)
layerName = "**NO_BASE**";
else
layerName = theLayers[currentLayer].name;
}
else{
if(!versionControls[v])
continue;
if(lastVersion != -1)// Turn the last layer back off
theLayers[lastVersion].visible = false;
lastVersion = v;
theLayers[v].visible = true;
layerName = theLayers[v].name;
}
}
if (currentINFO.nameConvType == 4){
currentPage = pgStart;
numericallyLastPage = pgEnd;
}
else if (currentINFO.doSpreadsON){
currentPage = pgStart - 1;
numericallyLastPage = pgEnd;
}
else {
currentPage = getPageOffset(pgStart, currentINFO.pageNameArray, currentINFO.pageRangeArray);
numericallyLastPage = getPageOffset(pgEnd, currentINFO.pageNameArray, currentINFO.pageRangeArray);
}
if(layerName != "**NO_BASE**"){
do{
currentINFO.outfileName = getPageName(currentPage, layerName, currentINFO);
if (currentINFO.doSpreadsON){
pgRange = currentINFO.pageRangeArray[getPageOffset(currentINFO.theDoc.spreads[currentPage].pages[0].name, currentINFO.pageNameArray, currentINFO.pageRangeArray)];
}
else if (currentINFO.nameConvType == 4)
pgRange = currentINFO.pageRangeArray[currentPage-1];
else
pgRange = currentINFO.pageRangeArray[currentPage];
// Do the actual export:
exportPage(currentINFO, pgRange);
// Update progress bar if available
if(peuINFO.doProgressBar)
advanceBar();
currentPage++;
} while(currentPage <= numericallyLastPage);
}
}
}
// Export the page
function exportPage(currentINFO, pgRange){
var outFile = currentINFO.outDir + currentINFO.outfileName;
switch (currentINFO.formatType){
case 0: // PostScript Formatting
with(currentINFO.theDoc.printPreferences){
printFile = new File(outFile + ((currentINFO.psINFO.ext)?".ps":""));
pageRange = pgRange;
// Needed to get around blank pages using separations
try{
currentINFO.theDoc.print(false);
}catch(e){/*Just skip it*/}
}
break;
case 1: // PDF Formatting
app.pdfExportPreferences.pageRange = pgRange;
currentINFO.theDoc.exportFile(ExportFormat.pdfType, (new File(outFile + ".pdf")), false, currentINFO.pdfPRESET);
break;
case 2: // EPS Formatting
app.epsExportPreferences.pageRange = pgRange;
currentINFO.theDoc.exportFile(ExportFormat.epsType, (new File(outFile + ".eps")), false);
break;
case 3: // JPEG Formatting
if(pgRange == PageRange.allPages){
app.jpegExportPreferences.jpegExportRange = ExportRangeOrAllPages.exportAll;
}
else{
app.jpegExportPreferences.jpegExportRange = ExportRangeOrAllPages.exportRange;
app.jpegExportPreferences.pageString = pgRange;
}
currentINFO.theDoc.exportFile(ExportFormat.jpg, (new File(outFile + ".jpg")), false);
break;
}
}
// Create a name for the page being exported
function getPageName(currentPage, layerName, currentINFO){
var pgRename = "";
if (currentINFO.doSpreadsON)
currentINFO["currentSpread"] = currentINFO.theDoc.spreads[currentPage].pages;
switch (currentINFO.nameConvType){
case 3: // Odd/Even pages/spreads = .LA.F/LA.B, LB.F/LB.B ...
pgRename = makeLotName(currentPage+1, peuINFO.subType);
break;
case 2: // Odd/Even pages/spreads = .F/.B
pgRename = ((currentPage+1)%2 == 0) ? "B" : "F";
break;
case 1: // Add ".L" to the page name
pgRename = "L" + currentINFO.pageNameArray[currentPage];
break;
case 0: case 4:// As is or Numeric Override
// Optionally add "P" and any zeros if options chosen and is numerically named
// otherwise, just the "seperatorChar" is added to page name
if (currentINFO.doSpreadsON){
// Loops through number of pages per spread
// and adds each page name to the final name (P08.P01)
for (j = 0; j < currentINFO.currentSpread.length; j++){
if(currentINFO.currentSpread[j].appliedSection.includeSectionPrefix)
var tempPage = currentINFO.pageRangeArray[getPageOffset(currentINFO.currentSpread[j].name, currentINFO.pageNameArray, currentINFO.pageRangeArray)];
else
var tempPage = currentINFO.pageNameArray[getPageOffset(currentINFO.currentSpread[j].name, currentINFO.pageNameArray, currentINFO.pageRangeArray)];
var tempPageNum = parseInt(tempPage,10);
/* If section name starts with a number, need to compare length of orig vs parsed
* to see if the page name is solely a number or a combo num + letter, etc. */
if (! isNaN(tempPageNum) && ((""+tempPage).length == (""+tempPageNum).length )){
if (peuINFO.addZeroON)
tempPage = addLeadingZero(tempPageNum, currentINFO.theDoc.pages.length);
if (peuINFO.addPon)
tempPage = "P" + tempPage;
pgRename = (j==0) ? tempPage : pgRename + peuINFO.charList[peuINFO.seperatorChar] + tempPage;
}
}
}
else {
// Create a new name for an individual page
if (currentINFO.nameConvType == 4)
pgRename = currentPage;
else
pgRename = currentINFO.pageNameArray[currentPage];
if (! isNaN(parseInt(pgRename,10)) && (""+pgRename).length == (""+parseInt(pgRename,10)).length) {
if (peuINFO.addZeroON)
pgRename = addLeadingZero(pgRename, currentINFO.theDoc.pages.length);
if (peuINFO.addPon)
pgRename = "P" + pgRename;
}
}
break;
}
if(currentINFO.layersON)
pgRename = addPartToName(pgRename, layerName, peuINFO.layerBeforeON);
// Add page name to base name based on option selected
if(peuINFO.pageNamePlacement == 2)
pgRename = removeColons(currentINFO.baseName.replace(/<#>/g,pgRename) );
else
pgRename = addPartToName(currentINFO.baseName, pgRename,peuINFO.pageNamePlacement);
return pgRename;
}
// Add a name part before or after a given base string
function addPartToName(theBase, addThis, addBefore){
//Remove any colons
theBase = removeColons(theBase);
addThis = removeColons(addThis);
return (addBefore) ? (addThis + peuINFO.charList[peuINFO.seperatorChar] + theBase ):(theBase + peuINFO.charList[peuINFO.seperatorChar] + addThis);
}
// Find the offset page number for a page by its name
function getPageOffset(pgToFind, pageNameArray, pageRangeArray){
var offset;
for(offset = 0; offset<pageRangeArray.length;offset++){
if((""+ pgToFind).toLowerCase() == (("" + pageNameArray[offset]).toLowerCase() ) || (""+ pgToFind).toLowerCase() == (("" + pageRangeArray[offset]).toLowerCase() ) )
return offset;
}
return -1;
}
// Replace any colons with specialReplaceChar
function removeColons(tempName){
return tempName.replace(/:/g,peuINFO.charList[peuINFO.specialReplaceChar]);
}
// Remove spaces from front and end of name
function removeSpaces(theName){
// Trim any leading or trailing spaces in base name
var i,j;
for(i = theName.length-1;i>0 && theName.charAt(i) == " ";i--);// Ignore any spaces on end of name
for(j = 0; j<theName.length && theName.charAt(j) == " ";j++);// Ignore any spaces at front of name
theName = theName.substring(j,i+1);
return theName;
}
// Add leading zero(s)
function addLeadingZero(tempPageNum, pageCount){
if(peuINFO.zeroPadding == 0){
// Normal padding
if((tempPageNum < 10 && pageCount < 100) || (tempPageNum > 9 && pageCount > 99 && tempPageNum < 100))
return addSingleZero(tempPageNum);
else if(tempPageNum < 10 && pageCount > 99)
return addDoubleZero(tempPageNum);
else
return ("" + tempPageNum);
}else if(peuINFO.zeroPadding == 1){
// Pad to 2 digits
if(tempPageNum < 10)
return addSingleZero(tempPageNum);
else
return ("" + tempPageNum);
}else{
// Pad to 3 digits
if(tempPageNum < 10)
return addDoubleZero(tempPageNum);
else if(tempPageNum < 100)
return addSingleZero(tempPageNum);
else
return ("" + tempPageNum);
// Add leading zero helper for single
function addSingleZero(pgNum){
return ("0" + pgNum);
// Add leading zero helper for double
function addDoubleZero(pgNum){
return ("00" + pgNum);
// Create lot name from page number
function makeLotName(thePage, subType){
var iii = thePage;
var curr = 0;
var alphaBet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
var lotName = "L";
if(subType == 0){
while(iii>52){
curr = Math.floor((iii-1)/52)-1;
lotName += alphaBet[curr];
if(curr >= 0)
iii -= 52*(1+curr);
else
iii -= 52;
}
lotName += alphaBet[Math.floor((iii-1)/2)%26];
}
else
for(iii=thePage; iii>0; iii-=52)
lotName += alphaBet[Math.floor((iii-1)/2)%26];
return lotName += (thePage & 0x1)?".F":".B";
}
// Advance progress bar one unit
function advanceBar(){
var progCancel = app.setProgress(++progCurrentPage);
if(progCancel)
byeBye("User canceled export.",peuINFO.sayCancel);
// Create an empty tempENTRY "struct"
function getNewTempENTRY(){
var newTempENTRY = new Array();
newTempENTRY["theDoc"] = null;
newTempENTRY["singlePage"] = null;
newTempENTRY["getOut"] = null;
newTempENTRY["outDir"] = null;
newTempENTRY["outfileName"] = "";
newTempENTRY["nameConvType"] = null;
newTempENTRY["baseName"] = null;
newTempENTRY["doSpreadsON"] = null;
newTempENTRY["doOneFile"] = null;
newTempENTRY["formatType"] = null;
newTempENTRY["layersON"] = null;
newTempENTRY["hasNonContig"] = false;
newTempENTRY["nonContigPgs"] = null;
newTempENTRY["pageNameArray"] = new Array();
newTempENTRY["pageRangeArray"] = new Array();
newTempENTRY["psINFO"] = null;
newTempENTRY["pdfPRESET"] = null;
newTempENTRY["epsINFO"] = null;
newTempENTRY["jpegINFO"] = null;
newTempENTRY["layerINFO"] = null;
newTempENTRY["useTag"] = null;
newTempENTRY["pgStart"] = null;
newTempENTRY["pgEnd"] = null;
return newTempENTRY;
}
// Record all the page/spread names
function recordPgNames(tempENTRY){
// Get names of all the pages. Needed when pages are named using sectioning
for (i = 0; i < tempENTRY.theDoc.documentPreferences.pagesPerDocument; i++){
var aPage = tempENTRY.theDoc.pages.item(i);
tempENTRY.pageNameArray[i] = aPage.name;
tempENTRY.pageRangeArray[i] = (aPage.appliedSection.includeSectionPrefix)? aPage.name : (aPage.appliedSection.name + aPage.name);
}
return tempENTRY;
}
// Set the export options
function setExportOption(currentINFO){
// Set any options here instead of with each page
switch (currentINFO.formatType){
case 0: // PostScript Formatting
setPSoptions(currentINFO);
break;
case 1: // PDF Formatting
// Nothing to do
break;
case 2: // EPS Formatting
setEPSoptions(currentINFO.epsINFO);
break;
case 3: // JPEG Formatting
setJPEGoptions(currentINFO.jpegINFO);
break;
}
}
// Get PostScript format options
function getPSoptions(docName){
var psOptions = new Array();
psOptions["ignore"] = true;
var tempGetOut, PSdlog, pgHeight, pgWidth;
var changeAddPSextention, tempBaseName;
var printPreset;
do{
tempGetOut = true;
PSdlog = app.dialogs.add({name:"PostScript Options for \"" + docName + "\"", canCancel:true} );
with (PSdlog)
with (dialogColumns.add() ){
with (dialogRows.add() )
staticTexts.add({staticLabel:"Print Presets:"} );
printPreset = dropdowns.add({stringList:peuINFO.psPrinterNames , minWidth:236, selectedIndex:peuINFO.defaultPrintPreset} );
with (dialogRows.add() )
staticTexts.add({staticLabel:"Override PS Page Size (" + peuINFO.measureLableArray[peuINFO.measurementUnits] + ")"} );
with (borderPanels.add() )
with (dialogColumns.add() )
with (dialogRows.add() ){
staticTexts.add({staticLabel:"Width:", minWidth:45} );
pgWidth = textEditboxes.add({editContents:"0", minWidth:53} );
staticTexts.add({staticLabel:"Height:", minWidth:45} );
pgHeight = textEditboxes.add({editContents:"0", minWidth:54} );
with (dialogRows.add() ){
staticTexts.add({staticLabel:"Add \".ps\" to end of file name"} );
changeAddPSextention = dropdowns.add({stringList:["No","Yes"], selectedIndex:peuINFO.addPSextention} )
if((PSdlog.show()) ){
// Get the page height + width
pgHeight = parseFloat(pgHeight.editContents);
pgWidth = parseFloat(pgWidth.editContents);
// Check entered H & W for error
if(isNaN(pgHeight) || isNaN(pgWidth) || pgHeight < 0 || pgWidth < 0 ){
alert ("Both page height and width must be numeric and greater than zero (3.1).");
pgHeight = "0";
pgWidth = "0";
tempGetOut = false;
continue;
}
if(pgHeight > 0 && pgWidth > 0) // User changed size, use the new size
psOptions.ignore = false;
psOptions["height"] = pgHeight + peuINFO.measureUnitArray[peuINFO.measurementUnits];
psOptions["width"] = pgWidth + peuINFO.measureUnitArray[peuINFO.measurementUnits];
psOptions["ext"] = changeAddPSextention.selectedIndex;
peuINFO.addPSextention = psOptions["ext"];
psOptions["preset"] = printPreset.selectedIndex
peuINFO.defaultPrintPreset = psOptions.preset;
savePrefs();
PSdlog.destroy();
}
else{
PSdlog.destroy();
byeBye("Exporting has been canceled by user.",peuINFO.sayCancel);
} while(!tempGetOut);
return psOptions;
}
// Set PostScript options
function setPSoptions(theINFO){
with(currentINFO.theDoc.printPreferences){
activePrinterPreset = peuINFO.csPSprinters[theINFO.psINFO.preset];
peuINFO.origSpread = printSpreads; // Used to reset to original state when done
printSpreads = theINFO.doSpreadsON;
if(colorOutput != ColorOutputModes.separations && colorOutput != ColorOutputModes.inripSeparations)
printBlankPages = true;
if (theINFO.useTag)
pageInformationMarks = true;
else
pageInformationMarks = false;
if(!theINFO.psINFO.ignore){
try{
paperSize = PaperSizes.custom;
paperHeight = theINFO.psINFO.height;
paperWidth = theINFO.psINFO.width;
}catch(Exception){
alert ("The current PPD doesn't support custom page sizes. The page size from the Print Preset will be used (3.2).");
// Get PDF options
function getPDFoptions(theINFO){
var PDFdlog = app.dialogs.add({name:"PDF Options for \"" + theINFO.theDoc.name.split(".ind")[0] + "\"", canCancel:true} );
var temp = new Array();
for(i=0;i<app.pdfExportPresets.length;i++)
temp.push(app.pdfExportPresets[i].name);
// Test if default PDFpreset is greater # than actual list.
// This occurs if one was deleted and the last preset in the list was the default
if(peuINFO.defaultPDFpreset > temp.length-1)
peuINFO.defaultPDFpreset = 0;
with (PDFdlog)
with (dialogColumns.add() ){
with (dialogRows.add() )
staticTexts.add({staticLabel:"PDF Export Preset:"} );
pdfPresets = dropdowns.add({stringList: temp, minWidth:50, selectedIndex:peuINFO.defaultPDFpreset} );
if(PDFdlog.show() ){
temp = app.pdfExportPresets[pdfPresets.selectedIndex];
peuINFO.defaultPDFpreset = pdfPresets.selectedIndex;
peuINFO.origSpread = temp.exportReaderSpreads;
try{
temp.exportReaderSpreads = theINFO.doSpreadsON;
temp.pageInformationMarks = (theINFO.useTag && temp.cropMarks)?true:false;
}catch(e){/*ignore it*/}
PDFdlog.destroy();
return temp;
}
else{
PDFdlog.destroy();
byeBye("PDF exporting has been canceled by user.", peuINFO.sayCancel);
// Get JPEG options
function getJPEGoptions(docName){
var temp = new Array();
var JPEGdlog = app.dialogs.add({name:"JPEG Options for \"" + docName + "\"", canCancel:true} );
with (JPEGdlog)
with (dialogColumns.add() ){
with (dialogRows.add() )
staticTexts.add({staticLabel:"Quality:"} );
JPEGquality = dropdowns.add({stringList:(new Array("Low","Medium","High","Maximum")) , minWidth:50, selectedIndex:peuINFO.defaultJPEGquality} );
with (dialogRows.add() )
staticTexts.add({staticLabel:"Encoding Type:"} );
JPEGrender = dropdowns.add({stringList:["Baseline","Progressive"] , minWidth:50, selectedIndex:peuINFO.defaultJPEGrender } );
if(JPEGdlog.show() ){
peuINFO.defaultJPEGquality = JPEGquality.selectedIndex;
temp["qualityType"] = peuINFO.defaultJPEGquality;
peuINFO.defaultJPEGrender = JPEGrender.selectedIndex;
temp["renderType"] = peuINFO.defaultJPEGrender;
}
else{
JPEGdlog.destroy();
byeBye("JPEG exporting has been canceled by user.",peuINFO.sayCancel);
JPEGdlog.destroy();
return temp;
}
// Set JPEG options
function setJPEGoptions(theINFO){
with(app.jpegExportPreferences){
peuINFO.origSpread = exportingSpread; // Used to reset to original state when done
exportingSpread = currentINFO.doSpreadsON;
exportingSelection = false; // Export the entire page
if(peuINFO.csVersion > 3)
jpegExportRange = ExportRangeOrAllPages.exportRange;
switch (theINFO.qualityType){
case 0:
jpegQuality = JPEGOptionsQuality.low;
break;
case 1:
jpegQuality = JPEGOptionsQuality.medium;
break;
case 2:
jpegQuality = JPEGOptionsQuality.high;
break;
case 3:
jpegQuality = JPEGOptionsQuality.maximum;
break;
}
jpegRenderingStyle = (theINFO.renderType)? JPEGOptionsFormat.progressiveEncoding : JPEGOptionsFormat.baselineEncoding;
}
}
// Get EPS options
function getEPSoptions(docName){
var epsOptions = new Array();
var epsDialog = app.dialogs.add({name:"EPS Options for \"" + docName + "\"", canCancel:true} );
var oldBleed = peuINFO.bleed;
with (epsDialog){
// Left Column
with (dialogColumns.add() ){
with (dialogRows.add() )
with (borderPanels.add() )
with (dialogColumns.add() ){
with (dialogRows.add() )
staticTexts.add({staticLabel:"Flattener Presets:"} );
changeFlattenerPreset = dropdowns.add({stringList:peuINFO.flattenerNames , minWidth:180, selectedIndex:peuINFO.defaultFlattenerPreset} );
with (dialogRows.add() )
changeIgnoreOverride = checkboxControls.add({staticLabel:"Ignore Overrides", checkedState:peuINFO.ignoreON} );
with (dialogRows.add() )
staticTexts.add({staticLabel:"Preview Type:"} );
changePreviewPreset = dropdowns.add({stringList:peuINFO.previewTypes , minWidth:180, selectedIndex:peuINFO.defaultPreview} );
with (dialogRows.add() ){
staticTexts.add({staticLabel:"Bleed:"} );
changeBleedVal = realEditboxes.add({editValue:peuINFO.bleed, minWidth:60} );
staticTexts.add({staticLabel:peuINFO.measureLableArray[peuINFO.measurementUnits]} );
with (dialogRows.add() )
staticTexts.add({staticLabel:"OPI Options:"} );
with (dialogRows.add() ){
staticTexts.add({staticLabel:"Omit:"} );
changeOpiEPS = checkboxControls.add({staticLabel:"EPS", checkedState:peuINFO.epsON} );
changeOpiPDF = checkboxControls.add({staticLabel:"PDF", checkedState:peuINFO.pdfON} );
changeOpiBitmap = checkboxControls.add({staticLabel:"Bitmapped", checkedState:peuINFO.bitmapON} );
// Right column
with (dialogColumns.add() ){
with(borderPanels.add() ){
with(dialogColumns.add() ){
with (dialogRows.add() ){
staticTexts.add({staticLabel:"PostScript level:"} );
var changePSlevel = dropdowns.add({stringList:["2","3"] , minWidth:75, selectedIndex:peuINFO.psLevel} );
with (dialogRows.add() ){
staticTexts.add({staticLabel:"Color mode:"} );
if(peuINFO.csVersion == 3)
var changeColorMode = dropdowns.add({stringList:["Unchanged","Grayscale", "RGB", "CMYK"] , minWidth:100, selectedIndex:peuINFO.colorType } );
else
var changeColorMode = dropdowns.add({stringList:["Unchanged","Grayscale", "RGB", "CMYK","PostScript Color Management"] , minWidth:100, selectedIndex:peuINFO.colorType } );
with (dialogRows.add() ){
staticTexts.add({staticLabel:"Font embedding:"} );
var changeFontEmbedding = dropdowns.add({stringList:["None","Complete", "Subset"] , minWidth:100, selectedIndex:peuINFO.fontEmbed } );
with (dialogRows.add() ){
staticTexts.add({staticLabel:"Type of data to send:"} );
var changeDataToSend = dropdowns.add({stringList:["All","Proxy"] , minWidth:50, selectedIndex:peuINFO.dataSent } );
with (dialogRows.add() ){
staticTexts.add({staticLabel:"Data type:"} );
var changeDataType = dropdowns.add({stringList:["Binary","ASCII"] , minWidth:50, selectedIndex:peuINFO.dataType } );
with (dialogRows.add() ){
staticTexts.add({staticLabel:"Perform OPI replacement:"} );
var changeOPIreplace = dropdowns.add({stringList:["No","Yes"] , minWidth:50, selectedIndex:peuINFO.opiReplacement} );
do{
var getOut = true;
if((epsDialog.show()) ){
// Use these to update the prefs file
peuINFO.defaultFlattenerPreset = changeFlattenerPreset.selectedIndex;
peuINFO.ignoreON = (changeIgnoreOverride.checkedState)?1:0;
peuINFO.defaultPreview = changePreviewPreset.selectedIndex;
peuINFO.bleed = changeBleedVal.editContents;
peuINFO.epsON = (changeOpiEPS.checkedState)?1:0;
peuINFO.pdfON = (changeOpiPDF.checkedState)?1:0;
peuINFO.bitmapON = (changeOpiBitmap.checkedState)?1:0;
peuINFO.psLevel = changePSlevel.selectedIndex;
peuINFO.colorType = changeColorMode.selectedIndex;
peuINFO.fontEmbed = changeFontEmbedding.selectedIndex;
peuINFO.dataSent = changeDataToSend.selectedIndex;
peuINFO.dataType = changeDataType.selectedIndex;
peuINFO.opiReplacement = changeOPIreplace.selectedIndex;
// Check if bleed value is OK
peuINFO.bleed = parseFloat(peuINFO.bleed);
if (isNaN(peuINFO.bleed)){
alert("Bleed value must be a number (1.1).");
getOut = false;
peuINFO.bleed = oldBleed;
}
else if (peuINFO.bleed < 0){
alert("Bleed value must be greater or equal to zero (1.2).");
getOut = false;
peuINFO.bleed = oldBleed;
}
else {
// Check if bleed is too big
try {
app.epsExportPreferences.bleedBottom = "" + peuINFO.bleed + peuINFO.measureUnitArray[peuINFO.measurementUnits];
}catch (Exception){
alert("The bleed value must be less than one of the following: 6 in | 152.4 mm | 432 pt | 33c9.384");
getOut = false;
peuINFO.bleed = oldBleed;
else{
epsDialog.destroy();
byeBye("EPS Export canceled by user.", peuINFO.sayCancel);
}while(!getOut);
// These are used for exporting
epsOptions["defaultFlattenerPreset"] = changeFlattenerPreset.selectedIndex;
epsOptions["ignoreON"] = peuINFO.ignoreON;
epsOptions["defaultPreview"] = changePreviewPreset.selectedIndex;
epsOptions["bleed"] = peuINFO.bleed;
epsOptions["epsON"] = peuINFO.epsON;
epsOptions["pdfON"] = peuINFO.pdfON;
epsOptions["bitmapON"] = peuINFO.bitmapON;
epsOptions["psLevel"] = changePSlevel.selectedIndex;
epsOptions["colorType"] = changeColorMode.selectedIndex;
epsOptions["fontEmbed"] = changeFontEmbedding.selectedIndex;
epsOptions["dataSent"] = changeDataToSend.selectedIndex;
epsOptions["dataType"] = changeDataType.selectedIndex;
epsOptions["opiReplacement"] = changeOPIreplace.selectedIndex;
epsDialog.destroy();
return epsOptions;
}
// Apply chosen settings to the EPS export prefs
function setEPSoptions(theINFO){
with(app.epsExportPreferences){
appliedFlattenerPreset = peuINFO.flattenerNames[theINFO.defaultFlattenerPreset];
bleedBottom = "" + theINFO.bleed + peuINFO.measureUnitArray[peuINFO.measurementUnits];
bleedInside = bleedBottom;
bleedOutside = bleedBottom;
bleedTop = bleedBottom;
epsSpreads = currentINFO.doSpreadsON;
ignoreSpreadOverrides = theINFO.ignoreON;
switch (theINFO.dataType){
case 0:
dataFormat = DataFormat.binary;
break;
case 1:
dataFormat = DataFormat.ascii;
break;
}
switch (theINFO.colorType){
case 0:
epsColor = EPSColorSpace.unchangedColorSpace;
break;
case 1:
epsColor = EPSColorSpace.gray;
break;
case 2:
epsColor = EPSColorSpace.rgb;
break;
case 3:
epsColor = EPSColorSpace.cmyk;
break;
case 4:
epsColor = EPSColorSpace.postscriptColorManagement;
break;
}
switch (theINFO.fontEmbed){
case 0:
fontEmbedding = FontEmbedding.none;
break;
case 1:
fontEmbedding = FontEmbedding.complete;
break;
case 2:
fontEmbedding = FontEmbedding.subset;
break;
}
switch (theINFO.dataSent){
case 0:
imageData = EPSImageData.allImageData;
break;
case 1:
imageData = EPSImageData.proxyImageData;
break;
}
switch (theINFO.defaultPreview){
case 0:
preview = PreviewTypes.none;
break;
case 1:
preview = PreviewTypes.tiffPreview;
break;
case 2:
preview = PreviewTypes.pictPreview;
break;
}
switch (theINFO.psLevel){
case 0:
postScriptLevel = PostScriptLevels.level2;
break;
case 1:
postScriptLevel = PostScriptLevels.level3;
break;
}
// Setting these three to false prevents a conflict error when trying to set the opiImageReplacement value
omitBitmaps = false;
omitEPS = false;
omitPDF = false;
if (theINFO.opiReplacement){
opiImageReplacement = true;
}
else {
opiImageReplacement = false;
omitBitmaps = theINFO.bitmapON;
omitEPS = theINFO.epsON;
omitPDF = theINFO.pdfON;
}
}
}
// Build the main dialog box
function createMainDialog (docName, thisNum, endNum){
var theDialog = app.dialogs.add({name:(VERSION_NAME + ": Enter the options for \"" + docName + "\"" + ((endNum==1)?"":" (" + thisNum + " of " + endNum + " documents)") )
, canCancel:true} );
with (theDialog){
// Left Column
with (dialogColumns.add() ){
with (dialogRows.add() )
staticTexts.add({staticLabel:"Page Naming Options"} );
with(borderPanels.add() ){
with(dialogColumns.add() ){
// Radio butons for renaming convention
namingConvention = radiobuttonGroups.add();
with(namingConvention){
radiobuttonControls.add({staticLabel:"As Is", checkedState:(peuINFO.nameConvType == 0)} );
radiobuttonControls.add({staticLabel:"Add \".L\"", checkedState:(peuINFO.nameConvType == 1)} );
radiobuttonControls.add({staticLabel:"Odd/Even = \".F/.B\"", checkedState:(peuINFO.nameConvType == 2)} );
radiobuttonControls.add({staticLabel:"Odd/Even = \"LA.F/LA.B\"" , checkedState:(peuINFO.nameConvType == 3)});
radiobuttonControls.add({staticLabel:"Numeric Override" , checkedState:(peuINFO.nameConvType == 4)});
with (dialogRows.add() )
staticTexts.add({staticLabel:""} );
with(dialogRows.add() ){
staticTexts.add({staticLabel:"Base Name"} );
newBaseName = textEditboxes.add({editContents:baseName, minWidth:100} );
// Middle Column
with (dialogColumns.add() ){
with (dialogRows.add() )
RichardM0701, are you the same person as scottbentley?
I fear your response doesn't make a lot of sense. The easiest way is to put your files into a book, which will not harm them. The easiest script is often no script at all.
I don't understand your comment about separate text boxes; files contain text boxes, and multiple files by definition means multiple separate text boxes.
I do not know what you are referring to with respect to the toc update, perhaps you could reference a particular command or script, or provide a screenshot.
It sounds like now you are interested in updating multiple tables of contents in a single fell swoop -- this would appear to have nothing to do with the original question. If that's the case, please start a new thread.
-
Analysing Task Audit, Data Audit and Process Flow History
Hi,
Internal Audit dept has requested a bunch of information, that we need to compile from Task Audit, Data Audit and Process Flow History logs. We do have all the info available, however not in a format that allows proper "reporting" of log information. What is the best way to handle HFM logs so that we can quickly filter and export required audit information?
We do have housekeeping in place, so the logs are partial "live" db tables, and partial purged tables that were exported to Excel to archive the historical log info.
Many Thanks.

I thought I posted this Friday, but I just noticed I never hit the 'Post Message' button, ha ha.
The info below will help you translate some of the information in the tables, etc. You could report on it from the audit tables directly, or move them to another appropriate data table for analysis later. The consensus, though I disagree, is that you will suffer performance issues if your audit tables get too big, so you want to move them periodically. You can do this using a scheduled task, a manual process, etc.
I personally just dump it to another table and report on it from there. As mentioned above, you'll need to translate some of the information as it is not 'human readable' in the database.
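To illustrate that translation step, here is a minimal Python sketch (my own illustration, not from the original post: it assumes the audit timestamps are OLE automation dates, i.e. days since 1899-12-30, which is what the `StartTime-2` cast in the SQL below implies; the activity names are taken from the code table later in this reply, and the sample values are made up):

```python
from datetime import datetime, timedelta

# Assumption: HFM stores StartTime/EndTime as OLE automation dates
# (fractional days since 1899-12-30). The "cast(StartTime-2 as
# smalldatetime)" trick in the SQL below shifts the same value onto
# SQL Server's 1900-01-01 epoch instead.
OLE_EPOCH = datetime(1899, 12, 30)

# A few entries from the activity-code table later in this reply
ACTIVITY_NAMES = {
    1: "Rules Load",
    4: "Consolidation",
    9: "Data Load",
    21: "Metadata Load",
    23: "Member List Load",
    29: "Logon",
}

def ole_to_datetime(ole_date: float) -> datetime:
    """Convert an OLE automation date value to a Python datetime."""
    return OLE_EPOCH + timedelta(days=ole_date)

def describe(activity_code: int, start_time: float) -> str:
    """Render one audit row in human-readable form."""
    name = ACTIVITY_NAMES.get(activity_code, f"Unknown ({activity_code})")
    return f"{name} at {ole_to_datetime(start_time).isoformat(sep=' ')}"

print(describe(21, 2.5))  # → Metadata Load at 1900-01-01 12:00:00
```

The same arithmetic is what the SQL below does in-database; do the conversion in whichever layer you report from.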
For instance, if I wanted to pull Metadata Load, Rules Load, and Member List Load activity, I could run a query like this. (NOTE: @strAppName should be set to the name of your application.)
The main tricks to know, at least for the task audit table, are figuring out how to convert the times and determining which activity code corresponds to which user-friendly name.
-- Declare working variables --
declare @dtStartDate as nvarchar(20)
declare @dtEndDate as nvarchar(20)
declare @strAppName as nvarchar(20)
declare @strSQL as nvarchar(4000)
-- Initialize working variables --
set @dtStartDate = '1/1/2012'
set @dtEndDate = '8/31/2012'
set @strAppName = 'YourAppNameHere'
--Get Rules Load, Metadata, Member List
set @strSQL = '
select sUserName as "User", ''Rules Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
cast(EndTime-2 as smalldatetime) as "Time End", ServerName, strDescription, strModuleName
from ' + @strAppName + '_task_audit ta, hsv_activity_users au
where au.lUserID = ta.ActivityUserID and activitycode in (1)
and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + '''
union all
select sUserName as "User", ''Metadata Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
cast(EndTime-2 as smalldatetime) as "Time End", ServerName, strDescription, strModuleName
from ' + @strAppName + '_task_audit ta, hsv_activity_users au
where au.lUserID = ta.ActivityUserID and activitycode in (21)
and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + '''
union all
select sUserName as "User", ''Memberlist Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
cast(EndTime-2 as smalldatetime) as "Time End", ServerName, strDescription, strModuleName
from ' + @strAppName + '_task_audit ta, hsv_activity_users au
where au.lUserID = ta.ActivityUserID and activitycode in (23)
and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + ''''
exec sp_executesql @strSQL

In regards to activity codes, here's a quick breakdown of those:
ActivityID ActivityName
0 Idle
1 Rules Load
2 Rules Scan
3 Rules Extract
4 Consolidation
5 Chart Logic
6 Translation
7 Custom Logic
8 Allocate
9 Data Load
10 Data Extract
11 Data Extract via HAL
12 Data Entry
13 Data Retrieval
14 Data Clear
15 Data Copy
16 Journal Entry
17 Journal Retrieval
18 Journal Posting
19 Journal Unposting
20 Journal Template Entry
21 Metadata Load
22 Metadata Extract
23 Member List Load
24 Member List Scan
25 Member List Extract
26 Security Load
27 Security Scan
28 Security Extract
29 Logon
30 Logon Failure
31 Logoff
32 External
33 Metadata Scan
34 Data Scan
35 Extended Analytics Export
36 Extended Analytics Schema Delete
37 Transactions Load
38 Transactions Extract
39 Document Attachments
40 Document Detachments
41 Create Transactions
42 Edit Transactions
43 Delete Transactions
44 Post Transactions
45 Unpost Transactions
46 Delete Invalid Records
47 Data Audit Purged
48 Task Audit Purged
49 Post All Transactions
50 Unpost All Transactions
51 Delete All Transactions
52 Unmatch All Transactions
53 Auto Match by ID
54 Auto Match by Account
55 Intercompany Matching Report by ID
56 Intercompany Matching Report by Acct
57 Intercompany Transaction Report
58 Manual Match
59 Unmatch Selected
60 Manage IC Periods
61 Lock/Unlock IC Entities
62 Manage IC Reason Codes
63 Null -
CMSDK import/export utility
Can the 9i CMSDK import/export utility be used to export the content of an iFS 1.1.6 schema to subsequently import into a 9i database? Once imported into the 9i database, can we then migrate/upgrade the 9i iFS schema to CMSDK 9 for testing purposes?
Thank you for your advice Luis.
What I really need to be able to do is keep my iFS 1.1.6 / RDBMS 8.1.7.4 development environment intact while creating a CMSDK/Oracle9i environment for regression testing. I have multiple iFS schemas (6) in our 8.1.7.4 dev database (source). What I would like to do is copy one of the iFS environments (including metadata) out of the source and move it into the CMSDK/Oracle9i environment (target). Both environments need to remain accessible, and the source environment needs to remain at version iFS 1.1.6 / 8.1.7.4 for regression testing.
Here's a list of actions that I have taken:
1. install oracle9i rdbms software.
2. create 9i database from scratch.
3. install 9iAS software
4. install CMSDK software in 9iAS root.
5. export one iFS schema out of the dev source database. (Using the plain RDBMS export utility)
6. import this schema into the newly created Oracle9i database.
7. Run CMSDK configuration tool to upgrade the imported iFS 1.1.6 schema. <== FAILED HERE!!!!
I run into a problem at the start of the upgrade, when using the configuration tool. It complains that the credential manager username IFSSYS$CM does not exist and fails. So I simply created a user called IFSSYS$CM in the 9i target database to see if this would allow the configuration tool's upgrade process to progress, and the result was another error: "Insufficient Privileges". How can I manually set the credential manager so that the imported iFS schema can be upgraded successfully?
I realize that by doing an in-place upgrade of my entire Oracle 8i source database, I will be able to migrate both the RDBMS and iFS using the installation/configuration tools.
If there is a limitation that prevents copying an iFS 1.1.6 schema (including credential manager metadata) out of the source and into a target database and then upgrading/migrating the schema to CMSDK/Oracle9i, then I may be forced into cloning my entire development instance (giving it a different name) and doing an in-place migration of the entire cloned database (all 6 iFS schemas). Our source dev database is 18G and includes much more (probably garbage data) than will be needed for our regression testing. My hope was to take only a subset of the 18G (1 of the 6 iFS schemas) and move it into CMSDK/Oracle9i to perform the testing. We are very low on available disk space, and taking only a subset would be a more efficient use.
I have even attempted to use the CMSDK export utility to export the iFS 1.1.6 schema. I was hoping this would export the iFS 1.1.6 schema, metadata, contents, and credential manager. This was not successful due to Java packages that are not installed in the Oracle8i (8.1.7.4) database.
These are all the details of my dilemma. Can you help guide me in the right direction? I've spent a lot of time already fumbling through and have had a number of false starts with my proposed approach. I need someone to tell me the recommended way of approaching my issue. Thanks again. -
How to define table specific QUERY in parameter file for Export Utility
Hi All
I am trying to create a SINGLE parameter file for the export utility. I have 2 tables and want to use a different QUERY condition for each of them. However, I do not want to create 2 separate parameter files and run/script exp twice.
Is there a way to do it?
NOTE: I am using the normal export utility in Version 9, NOT Oracle Data Pump.
Following is an example of what i want to achive in parameter file:
Tables = (TabA,TabB)
# for table - TabA
QUERY = "where id =25"
#for table - TabB
QUERY = " where court_id=54 and id >1"
Please advise on the syntax of the parameter file so that exp knows it has to apply the 1st condition only to TabA and the 2nd condition only to TabB.
Any samples/syntax examples are greatly appreciated.
thanks in advance!
cool tech

No, there is no such alternative. The QUERY condition is applied to all of the tables listed in the TABLES parameter.
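As a workaround (my own sketch, not part of the answer above; the file names, dump-file names, and scott/tiger credentials are all illustrative), the usual approach with classic exp is one parameter file per QUERY condition and one exp run per file:

```
# taba.par
TABLES=(TabA)
QUERY="where id=25"
FILE=taba.dmp

# tabb.par
TABLES=(TabB)
QUERY="where court_id=54 and id>1"
FILE=tabb.dmp
```

Then run `exp scott/tiger PARFILE=taba.par` followed by `exp scott/tiger PARFILE=tabb.par`; wrapping both calls in one shell/batch script still leaves you with a single job to schedule, even though exp executes twice.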
http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch01.htm#1005843 -
Is any way to install only Export utility without installing oracle
How can I install only the Export utility on Windows Server or Windows XP?
And if not, is it possible to use the Export utility in Oracle 10g to export data from Oracle 9i?

The general rule of thumb is:
1. To export data from an upper version to a lower version, you need to export/import using the lower version's exp and imp utilities.
2. To export data from a lower version to an upper version, you need to export/import using the upper version's exp and imp utilities.
http://docs.oracle.com/cd/B19306_01/server.102/b14215/exp_imp.htm#sthref2852
I think it may be possible to install the database utilities without installing the full software. Please note that I have not tried this approach.
thanks