Clarifications on migrating to transformations
Dear Team,
I need some clarification.
I am working on an upgrade project (BW 3.1 to BI 7.0).
The customer is asking me to migrate all existing transfer rules/update rules to transformations. What needs to be done? Is it possible to migrate all existing transfer rules/update rules at once? Are there any programs to achieve this?
Best Regards,
SG
Hi SG,
Here are some useful links from SDN related to migration:
Re: Datasource Identification
Strategy toward new and 3.x datasource / infosource
Replicate NEW/3.x datasource - Methodology
Replicate NEW/3.x datasource - Change back to 3.x datasource Methodology
Assessing Migration Scenario's
Re: Few questions on BC activation..
Transformation Rules
Hope they help.
Similar Messages
-
Error in Routine while migrating standard Transformations from 3.5 to BI7.0
Hi Experts,
We are migrating the standard transformations from the old version to the new BI 7.0 version. When trying to create the new transformation, we get a routine error and are unable to activate the transformation.
Transformation Name: TRCS ZCO_OM_NAE_1 -> CUBE 0PS_C08
Routine Desc.: Conversion of Actual / Commitment / Plan to Resid.Order Plan
Source Fields: 0CURRENCY & 0FISCPER
Target Fields: 0AMOUNT & 0CURRENCY
Error Message: E: Field "COMM_STRUCTURE" is unknown. It is neither in one of the specified tables nor defined by a "DATA" statement.
Routine:
PROGRAM trans_routine.
*---------------------------------------------------------------------*
*       CLASS routine DEFINITION
*---------------------------------------------------------------------*
CLASS lcl_transform DEFINITION.
  PUBLIC SECTION.
*   Attributes
    DATA:
      p_check_master_data_exist
        TYPE RSODSOCHECKONLY READ-ONLY,
*-    Instance for getting request runtime attributes;
*     Available information: Refer to methods of
*     interface 'if_rsbk_request_admintab_view'
      p_r_request
        TYPE REF TO if_rsbk_request_admintab_view READ-ONLY.
  PRIVATE SECTION.
    TYPE-POOLS: rsd, rstr.
*   Rule specific types
*$*$ begin of global - insert your declaration only below this line  *-*
... "insert your code here
*$*$ end of global - insert your declaration only before this line   *-*
ENDCLASS.                    "routine DEFINITION
*$*$ begin of 2nd part global - insert your code only below this line *
*$*$ end of rule type
TYPES:
  BEGIN OF tys_TG_1_full,
*   InfoObject: 0CHNGID Change Run ID.
    CHNGID TYPE /BI0/OICHNGID,
*   InfoObject: 0RECORDTP Record type.
    RECORDTP TYPE /BI0/OIRECORDTP,
*   InfoObject: 0REQUID Request ID.
    REQUID TYPE /BI0/OIREQUID,
*   InfoObject: 0FISCVARNT Fiscal year variant.
    FISCVARNT TYPE /BI0/OIFISCVARNT,
*   InfoObject: 0FISCYEAR Fiscal year.
    FISCYEAR TYPE /BI0/OIFISCYEAR,
*   InfoObject: 0CURRENCY Currency key.
    CURRENCY TYPE /BI0/OICURRENCY,
*   InfoObject: 0CO_AREA Controlling area.
    CO_AREA TYPE /BI0/OICO_AREA,
*   InfoObject: 0CURTYPE Currency Type.
    CURTYPE TYPE /BI0/OICURTYPE,
*   InfoObject: 0METYPE Key Figure Type.
    METYPE TYPE /BI0/OIMETYPE,
*   InfoObject: 0VALUATION Valuation View.
    VALUATION TYPE /BI0/OIVALUATION,
*   InfoObject: 0VERSION Version.
    VERSION TYPE /BI0/OIVERSION,
*   InfoObject: 0VTYPE Value Type for Reporting.
    VTYPE TYPE /BI0/OIVTYPE,
*   InfoObject: 0WBS_ELEMT Work Breakdown Structure Element (WBS Element).
    WBS_ELEMT TYPE /BI0/OIWBS_ELEMT,
*   InfoObject: 0COORDER Order Number.
    COORDER TYPE /BI0/OICOORDER,
*   InfoObject: 0PROJECT Project Definition.
    PROJECT TYPE /BI0/OIPROJECT,
*   InfoObject: 0ACTIVITY Network Activity.
    ACTIVITY TYPE /BI0/OIACTIVITY,
*   InfoObject: 0NETWORK Network.
    NETWORK TYPE /BI0/OINETWORK,
*   InfoObject: 0PROFIT_CTR Profit Center.
    PROFIT_CTR TYPE /BI0/OIPROFIT_CTR,
*   InfoObject: 0COMP_CODE Company code.
    COMP_CODE TYPE /BI0/OICOMP_CODE,
*   InfoObject: 0BUS_AREA Business area.
    BUS_AREA TYPE /BI0/OIBUS_AREA,
*   InfoObject: 0ACTY_ELEMT Network Activity Element.
    ACTY_ELEMT TYPE /BI0/OIACTY_ELEMT,
*   InfoObject: 0STATUSSYS0 System Status.
    STATUSSYS0 TYPE /BI0/OISTATUSSYS0,
*   InfoObject: 0PS_OBJ PS Object Type.
    PS_OBJ TYPE /BI0/OIPS_OBJ,
*   InfoObject: 0VTSTAT Statistics indicator for value type.
    VTSTAT TYPE /BI0/OIVTSTAT,
*   InfoObject: 0AMOUNT Amount.
    AMOUNT TYPE /BI0/OIAMOUNT,
*   Field: RECORD Data record number.
    RECORD TYPE RSARECORD,
  END OF tys_TG_1_full.
* Additional declaration for update rule interface
DATA:
  MONITOR       TYPE STANDARD TABLE OF rsmonitor  WITH HEADER LINE,
  MONITOR_RECNO TYPE STANDARD TABLE OF rsmonitors WITH HEADER LINE,
  RECORD_NO     LIKE SY-TABIX,
  RECORD_ALL    LIKE SY-TABIX,
  SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS.
* global definitions from update rules
TABLES: ...
DATA: ...
FORM routine_0001
  CHANGING
    RETURNCODE LIKE sy-subrc
    ABORT      LIKE sy-subrc
  RAISING
    cx_sy_arithmetic_error
    cx_sy_conversion_error.
* init variables
* not supported
* icube_values = g.
  CLEAR result_table. REFRESH result_table.
  TYPE-POOLS: psbw1.
  DATA: l_psbw1_type_s_int1 TYPE psbw1_type_s_int1.
  DATA: lt_spread_values TYPE psbw1_type_t_act_spread.
  field-symbols: .
* fill return table
  move-corresponding to RESULT_TABLE.
  check not RESULT_TABLE-amount is initial.
  append RESULT_TABLE.
  endloop.
* if the returncode is not equal zero, the result will not be updated
  RETURNCODE = 0.
* if abort is not equal zero, the update process will be canceled
  ABORT = 0.
ENDFORM.                    "routine_0001
*$*$ end of 2nd part global - insert your code only before this line *
*---------------------------------------------------------------------*
*       CLASS routine IMPLEMENTATION
*---------------------------------------------------------------------*
CLASS lcl_transform IMPLEMENTATION.
*$*$ begin of routine - insert your code only below this line        *-*
    DATA:
      l_subrc          TYPE sy-tabix,
      l_abort          TYPE sy-tabix,
      ls_monitor       TYPE rsmonitor,
      ls_monitor_recno TYPE rsmonitors.
    REFRESH:
      MONITOR.
*   Runtime attributes
    SOURCE_SYSTEM = p_r_request->get_logsys( ).
*   Migrated update rule call
    PERFORM routine_0001
      CHANGING
        l_subrc
        l_abort.
*-- Convert messages into transformation format
    LOOP AT MONITOR INTO ls_monitor.
      MOVE-CORRESPONDING ls_monitor TO ls_monitor_recno.
      APPEND ls_monitor_recno TO MONITOR_RECNO.
    ENDLOOP.
    IF l_subrc <> 0.
      RAISE EXCEPTION TYPE cx_rsrout_skip_val.
    ENDIF.
    IF l_abort <> 0.
      RAISE EXCEPTION TYPE cx_rsrout_abort.
    ENDIF.
*$*$ end of routine - insert your code only before this line         *-*
  ENDMETHOD.                    "compute_0AMOUNT
*----------------------------------------------------------------------*
*       Method invert_0AMOUNT
*----------------------------------------------------------------------*
*       This subroutine needs to be implemented only for direct access
*       (for better performance) and for the Report/Report Interface
*       (drill through).
*       The inverse routine should transform a projection and
*       a selection for the target to a projection and a selection
*       for the source, respectively.
*       If the implementation remains empty, all fields are filled and
*       all values are selected.
*----------------------------------------------------------------------*
  METHOD invert_0AMOUNT.
*$*$ begin of inverse routine - insert your code only below this line*-*
... "insert your code here
*$*$ end of inverse routine - insert your code only before this line *-*
  ENDMETHOD.                    "invert_0AMOUNT
ENDCLASS.                    "routine IMPLEMENTATION
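The error quoted above ("Field COMM_STRUCTURE is unknown") is the classic symptom of 3.x update-rule code surviving inside a migrated transformation: the old communication structure COMM_STRUCTURE is not declared anywhere in the generated 7.0 routine class. A minimal before/after sketch of the usual repair; the field names are illustrative, not taken from this routine:

```abap
* 3.x update-rule code carried over unchanged - COMM_STRUCTURE does not
* exist in the generated transformation class, so activation fails:
*   RESULT = COMM_STRUCTURE-currency.

* 7.0 field-routine equivalent: the generated method receives the
* source record as SOURCE_FIELDS, so the old references are renamed:
METHOD compute_0AMOUNT.
  " illustrative mapping from a source field to the routine RESULT
  RESULT = SOURCE_FIELDS-currency.
ENDMETHOD.
```

The same rename has to be applied to every migrated FORM that still reads COMM_STRUCTURE.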
Regards
Krishanu.
Hi,
Go through the link below; it may help you a lot:
/message/7377688#7377688 [original link is broken]
Regards,
Marasa. -
Clarification on migration from iMac G5 running 10.4.11 to new intel iMac
Hello,
Although a lot of you have already posted on this subject, I just wanted to get a clarification, please. I just got a new Intel iMac and I would like to transfer all my files from my iMac G5 (10.4.11). Should I just use Migration Assistant and that's it? Will the two OSes conflict with each other? And also, is it possible to continue to use 10.4.11 on my new iMac?
Thank you very much! I hope Ms. Miriam reads this one. She seems to be very knowledgeable!
-Pedro
Not Miriam, but...
You can use Migration Assistant to go from a Tiger Mac to a Leopard Mac. You will be running the Migration Assistant utility on the new iMac. You might employ FireWire Target Disk Mode (on the old iMac) as part of the process, to connect the two Macs.
http://support.apple.com/kb/HT1661
A new iMac has one FireWire 800 port and no FireWire 400 port (which is the type on the iMac G5). So you will need to get a 9-pin (800) to 6-pin (400) adapter, or a cable that has an 800 end and a 400 end (if you want to use the FireWire method for connection). They are available here
http://eshop.macsales.com/item/Other%20World%20Computing/FIR1369AD/
http://eshop.macsales.com/item/Newer%20Technology/1394B96036/
You can also use a wired or wireless network connection between the two Macs, to do the data transfer.
And also, is it possible to continue to use 10.4.11 on my new iMac?
No. Macs can generally run the OS that came with them, or later (up to hardware limitations), but not earlier. So if the new iMac had 10.5.6 pre-installed, it cannot run any earlier version of Mac OS X. -
Error while Migrating the custom routines in Transformations
Dear All,
I am in the process of migrating BW 3.5 to BI 7.0. I migrated the standard cubes and DSOs from the BW 3.5 flow to the BI 7.0 flow successfully.
But while migrating the transformations that contain custom routines, I am facing the errors below.
The data object "COMM_STRUCTURE" does not have a component called "/BIC/ZGROSSPRI", but the routine contains /BIC/ZGROSSPRI.
I tried to change the BW 3.5 terminology to BI 7.0 terminology (e.g. COMM_STRUCTURE replaced by SOURCE_FIELDS), but was unable to solve it. There are nearly 20 custom routines written across all the transformations.
Can anyone who has faced the same type of problem guide me?
Thanks & Regards,
Dinakar
Hi,
We need to include the source and target; see the article below.
http://wiki.sdn.sap.com/wiki/display/profile/Surendra+Reddy
How to Correct Routines in Transformations
http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/business-intelligence/g-i/how%20to%20correct%20routines%20in%20transformations.pdf
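In practice, the correction described in that paper is a mechanical rename inside the migrated routine: every reference to the 3.x communication structure becomes a reference to the routine's SOURCE_FIELDS parameter. A hedged sketch using the field from the error message above, assuming /BIC/ZGROSSPRI is part of the transformation's source structure:

```abap
* 3.x update-rule syntax - after migration this raises "The data object
* COMM_STRUCTURE does not have a component called /BIC/ZGROSSPRI":
*   RESULT = COMM_STRUCTURE-/BIC/ZGROSSPRI.

* BI 7.0 transformation-routine syntax:
RESULT = SOURCE_FIELDS-/BIC/ZGROSSPRI.
```

The source structure of the transformation must actually contain the field, so the DataSource/InfoSource mapping has to be checked before renaming.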
Thanks
Reddy -
Data Migration - Data Models Dependency
Hi,
In a major ERP data migration and transformation project, from legacy to SAP or from SAP to SAP, one should finalize the sequence in which data should be migrated. For example, vendor data cannot be migrated before customer data, or tax data should be uploaded first and then customer data, etc. Is there any best practice or reference available for such dependencies? Primary models, and secondary models which depend on the primary models?
Please share such content if you have it; I'm sure it will be helpful to many.
The best way is to use iTunes. Back up your old iPad. Unplug it, plug your new one in, and go through the setup process. When you get the question 'set up as new or restore from backup', choose to restore from the backup you just made. Your app data and information will be put onto your new iPad.
You may need to redownload apps, but since the app data is already there, you should not lose progress on games or the such.
http://support.apple.com/en-us/HT2109
has directions. You can also use iCloud backup, but I personally prefer iTunes since it takes the internet out of the equation and things are done locally (beyond redownloading apps if needed). -
Transformations for 0IC_C03
Hi Guys,
We are doing a new BI 7.0 implementation and we are on Support pak 13.
In BW 3.5, for the 0IC_C03 (2LIS_03_BF/UM) cube we have a lot of complicated update rules.
As in BI 7.0 we are using transformations instead of transfer/update rules, how do I get all those complex update rules?
Please advise on the three options below, or suggest another:
1. Do I need to copy the code from some other system and paste it manually into my transformations?
2. Is it better to go with transfer/update rules instead of transformations?
3. First build transfer/update rules and then migrate to transformations?
Please advise.
Thank you,
PVK
Hi PVK,
Here is what we are following in my project,
before migrating the DataSource and starting development:
1) First activate the business content for 0IC_C03 (update rules, InfoSource and transfer rules).
2) Make a document that will serve as a backup of the update rule and transfer rule mapping.
(Take a backup of BW 3.5.)
3) Then migrate your DataSource "with export", storing it in one transport request.
4) Then create a transformation from the DataSource to the InfoSource (Z* template, e.g. 2LIS_03_BF) and write all transfer rule routines manually, converting them to OO ABAP by referring to the code backed up in the document.
5) Then create a transformation from the InfoSource to cube 0IC_C03 and write all update rule routines manually, converting them to OO ABAP by referring to the code backed up in the document.
6) Then create a DTP to load from the DataSource to 0IC_C03.
Making the document (step 2) is very tedious, but we have to do it.
Regards,
San! -
Transform Update/transfer rules into DTP in Standard Business Content
Hi all,
I have a new installation of BI 7.0, and we are massively going to use Standard BC delivered by SAP.
I received an input to transform all update/transfer rules contained in the standard BC into DTPs.
First of all, I noticed that in the area of interest for our scope, the DTPs delivered with the standard content are a very small percentage of the whole standard BC (areas HCM, FIN, SC). Is that possible?
Second, I have to perform an analysis of the impact/benefits/consequences of performing this transformation. We tried already, but we had some errors and had to stop.
Can you please help me?
Many Many Thanks
Valentina
TR/UR are migrated to transformations, not DTPs. A DTP is just like an InfoPackage which loads data from the PSA to data targets.
Migrate TR/UR before you migrate DataSources.
Migrate DataSources "with export" in RSDS.
Change the routines accordingly to OO ABAP.
There is no standard BC for transformations and DTPs. -
Importing public Transformation into global explorer OWB 11G
Hi,
I have around 80 public transformations (functions, procedures, packages) in OWB 10gR1. I am migrating all my maps to 11g. I am not able to migrate the public transformations that are used across the modules in 10g. How do I migrate them? Public transformations exist in the Global Explorer in 11g. I cannot import them from the database either, because there is no import option available when I right-click the function node or the procedure node in the Global Explorer.
Thanks.
Do you have the transforms in a 10gR1 MDL? Import via that, or import into the project tree and then copy-paste to the globals area.
Cheers
David -
Hi
I migrated the transfer rules from 3.5 to 7.0. The transformations are created by default with all the routines copied. But in the transformation, all my routines are showing under the global routines.
For example, in the update rules key figure 1 has routine "ABC", key figure 2 has routine "DEF" and key figure 3 has routine "GHI".
After migration, in the transformation key figure 1 has routines "ABC", "DEF" and "GHI" in the global routine.
I don't understand what is happening. Can anyone help me understand what happens after migration?
Regards
Annie
Hi Annie,
Please refer to the following SAP Notes:
SAP Note 912841: Migration help
Link: https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bw_whm/~form/handler
and
SAP Note 1052648: Migrating transfer rules and update rules for BW 7.x
Link:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bw_whm/~form/handler
Hope it helps.
Thanks & Regards,
Rashmi. -
Error while scheduling 2lis_11_Vaitm
Hi Gurus,
I am getting the following error when scheduling DataSource 2LIS_11_VAITM:
Operation could not be carried out for
Transformation does not exist (see long text)
Transformation does not exist (see long text)
Postprocessing PSA /BIC/B0000123 failed (rc = 1)
Error when activating DataSource 2LIS_11_VAITM S01LO100
Operation could not be carried out for
Transformation does not exist (see long text)
Transformation does not exist (see long text)
Postprocessing PSA /BIC/B0000123 failed (rc = 1)
Error when resetting DataSource 2LIS_11_VAITM S01LO100 to the active version
Operation could not be carried out for
Transformation does not exist (see long text)
Transformation does not exist (see long text)
Postprocessing PSA /BIC/B0000123 failed (rc = 1)
Error when activating DataSource 2LIS_11_VAITM S01LO100
Operation could not be carried out for
Transformation does not exist (see long text)
Transformation does not exist (see long text)
Postprocessing PSA /BIC/B0000123 failed (rc = 1)
Error when resetting DataSource 2LIS_11_VAITM S01LO100 to the active version
Please help me. Thanks in advance.
Thanks & Regards
Manna Das
Hi,
Please refer to following notes:
0001142908 70SP19: BDLS does not convert DTPs
0001266603 Re-importing into source system deletes routines & formulas
0001292311 Extraction monitor probs. for transformations from migration
0001369395 Transformations generated from migration: Repair tools
0001432815 BDLS does not convert DTP: See description for constraints
Thanks & Regards,
Vipin -
Upgrade service error after ZFD3 SP3 to ZFD4.01 upgrade
Hi,
Having upgraded from ZFD 3.2 SP3 to ZFD 4.01, and having carefully followed the documentation instructions, I still get the following error on the Inventory Service startup screen:
"The Upgrade Service is migrating the schema
An error has occurred. The upgrade service is exiting now."
The Inv Srv object Server Status log says:
"1178: An error occured while performing the DBSchema migration
1177: Unable to complete all the operations, the upgrade service is exiting
with errors.
106: The Storer succesfully connected to the database
108:the database in not initialized as it is being upgraded"
All of these errors on the support site have as a fix either restarting the Inventory Service or contacting Novell.
I've spent some hours now looking for a fix in the Knowledgebase and in the Zen forums but can't find anything useful. This is the second time I've run the upgrade and this problem has occurred. I've reinstalled Zen 3.2 from scratch and applied SP1 and SP3.
I've checked policies and database objects, all appears as it should be.
These seem to be the relevant errors from the log files:
Migration Log:
Migration Exception Message->Could not add the dbspace..ASA Error -121:
Permission denied: unable to create the dbspace file "CIM10",
SQLState->42501, ErrorCode->262
Zenworksinvservice log file:
[12/2/04 12:37:04.691] ZenInv - Upgrade Service: migrateDbToP1(true) failed
in AlterProcedure
[12/2/04 12:37:05.286] ZenInv - Upgrade Service: Upgrade to ZfD4.0 or ZFS3.0
SP1 failed. Reinstall ZFD4.0 or ZFS3.0 SP1 and start the inventory server
again
See full logs below.
Is there no option but to go straight to Novell?
Regards,
Steve Law
zenworksinvservice-041202123653303.txt
[12/2/04 12:36:53.280] ZENInv - Service Loader: getAttributesofDN:
Attributes of:
[12/2/04 12:36:53.349] ZENInv - Service Loader:
getAttributes:Attributes={zeninvhostserver=zeninvH ostServer: Distinguished
Name CN=SERVER1.OU=BMO.OU=UK.O=OURTREE, zeninvrole=zeninvRole: Integer 25}
[12/2/04 12:36:53.358] ZENInv - Service Loader: Logout from NDS returnedtrue
[12/2/04 12:36:53.725] ZENInv - Server Config: The severconfig service start
is pending...
[12/2/04 12:36:53.784] ZENInv - Server Config: getAttributesofDN: Attributes
of:
[12/2/04 12:36:53.827] ZENInv - Server Config:
getAttributes:Attributes={zeninvscanfilepath=zenin vScanFilePath: Case Ignore
String \\SERVER1\DATA\Zenworks\ScanDir, zeninvrole=zeninvRole: Integer 25,
zeninvcomponentstatus=zeninvComponentStatus: 4.0 12 0}
[12/2/04 12:36:53.832] ZENInv - Server Config: Logout from NDS returnedtrue
[12/2/04 12:36:53.838] ZENInv - Server Config: Getting Cascade INit time
[12/2/04 12:36:54.844] ZENInv - Server Config: Database Location Policy DN
is CN=Zen32 Server Package:General:ZENworks
Database.OU=Policies.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:36:54.887] ZENInv - Server Config: getAttributesofDN: Attributes
of:
[12/2/04 12:36:54.918] ZENInv - Server Config:
getAttributes:Attributes={zenlocdatabaseobject=zen locDatabaseObject:
Distinguished Name CN=SERVER1_invDatabase.OU=BMO.OU=UK.O=OURTREE}
[12/2/04 12:36:54.923] ZENInv - Server Config: Logout from NDS returnedtrue
[12/2/04 12:36:54.967] ZENInv - Server Config: getAttributesofDN: Attributes
of:
[12/2/04 12:36:55.001] ZENInv - Server Config:
getAttributes:Attributes={zendbpassword=zendbPassw ord: novell,
zendbjdbcprotocol=zendbJDBCProtocol: jdbc:,
zendbdatabasetype=zendbDatabaseType: Integer 0,
zendbjdbcsidservice=zendbJDBCSIDService: 11.32.25.98, zendburl=zendbURL:
Case Ignore String
jdbc:sybase:Tds:11.32.25.98:2638?ServiceName=mgmtd b&JCONNECT_VERSION=4,
zendbjdbcsubname=zendbJDBCSubName: Tds:, zendbjdbcdriver=zendbJDBCDriver:
com.sybase.jdbc.SybDriver, host server=Host Server: Distinguished Name
CN=SERVER1.OU=BMO.OU=UK.O=OURTREE, network address=Network Address: 13;20;,
zendbuser=zendbUser: MW_DBA, zendbjdbcsubprotocol=zendbJDBCSubProtocol:
sybase:, zendbjdbcport=zendbJDBCPort: 2638}
[12/2/04 12:36:55.013] ZENInv - Server Config: Logout from NDS returnedtrue
[12/2/04 12:36:56.297] ZENInv - Server Config: The severconfig service is
started...
[12/2/04 12:36:56.320] ZENInv - Sync Scheduler: The scheduler service start
is pending...
[12/2/04 12:36:56.382] ZENInv - Sync Scheduler: getAttributesofDN:
Attributes of:
[12/2/04 12:36:56.459] ZENInv - Sync Scheduler:
getAttributes:Attributes={zeninvsyncrepeatinterval =zeninvSyncRepeatInterval:
>
[12/2/04 12:36:56.463] ZENInv - Sync Scheduler: Logout from NDS returnedtrue
[12/2/04 12:36:56.585] ZENInv - Sync Scheduler: This is the schedule
fetchedRun Daily: Monday, Tuesday, Wednesday, Thursday, Friday From 00:00:00
To 22:55:00 (Randomly dispatch during time period)
[12/2/04 12:36:56.597] ZENInv - Sync Scheduler: fetchSchedule returned: true
[12/2/04 12:36:56.597] ZENInv - Sync Scheduler: Schedule Fired
[12/2/04 12:36:56.597] ZENInv - Sync Scheduler: The scheduler service is
started...
[12/2/04 12:36:56.721] ZenInv - Upgrade Service: getAttributesofDN:
Attributes of:
[12/2/04 12:36:56.752] ZenInv - Upgrade Service:
getAttributes:Attributes={zeninvcomponentstatus=ze ninvComponentStatus: 4.0
12 0}
[12/2/04 12:36:56.756] ZenInv - Upgrade Service: Logout from NDS
returnedtrue
[12/2/04 12:36:56.758] ZenInv - Upgrade Service: Info : Read
zenInvComponentStatus = 4.0 12 0
[12/2/04 12:36:56.758] ZenInv - Upgrade Service: Inventory Server
zenInvComponentStatus = 4.0 12 0
[12/2/04 12:36:56.789] Service Manager: start(ServiceDataAccessor, String[])
not found in
'com.novell.zenworks.desktop.inventory.strconverte r.STRConverterServiceInit'
[12/2/04 12:36:56.815] ZENInv - STRConverter: Start with
'ServiceStatusChangeListener' invoked
[12/2/04 12:37:04.691] ZenInv - Upgrade Service: migrateDbToP1(true) failed
in AlterProcedure
[12/2/04 12:37:04.807] ZENInv - Status Reporting: Messages are written into
XML file for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:37:04.845] ZENInv - Status Reporting: getAttributesofDN:
Attributes of:
[12/2/04 12:37:04.881] ZENInv - Status Reporting:
getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "-O j,
"- l, "Tw s, "Ty T, "T j, "T-
l, " s, " T, "6 j, "9Y l}
[12/2/04 12:37:04.898] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:04.912] ZENInv - Status Reporting: Number of records to add
are: 1 for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:37:04.951] ZENInv - Status Reporting: getAttributesofDN:
Attributes of:
[12/2/04 12:37:04.984] ZENInv - Status Reporting:
getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "-O j,
"- l, "Tw s, "Ty T, "T j, "T-
l, " s, " T, "6 j, "9Y l}
[12/2/04 12:37:04.990] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:04.991] ZENInv - Status Reporting: Adding record 0 for
DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:37:05.113] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:05.209] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:05.246] ZENInv - Status Reporting: getAttributesofDN:
Attributes of:
[12/2/04 12:37:05.277] ZENInv - Status Reporting:
getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "- l,
"Tw s, "Ty T, "T j, "T- l, "
s, " T, "6 j, "9Y l, "~; s}
[12/2/04 12:37:05.284] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:05.285] ZENInv - Status Reporting: Number of modified records
are: 0 for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:37:05.286] ZenInv - Upgrade Service: Upgrade to ZfD4.0 or ZFS3.0
SP1 failed. Reinstall ZFD4.0 or ZFS3.0 SP1 and start the inventory server
again
[12/2/04 12:37:05.288] ZENInv - Status Reporting: Messages are written into
XML file for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:37:05.326] ZENInv - Status Reporting: getAttributesofDN:
Attributes of:
[12/2/04 12:37:05.358] ZENInv - Status Reporting:
getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "- l,
"Tw s, "Ty T, "T j, "T- l, "
s, " T, "6 j, "9Y l, "~; s}
[12/2/04 12:37:05.364] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:05.365] ZENInv - Status Reporting: Number of records to add
are: 1 for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:37:05.403] ZENInv - Status Reporting: getAttributesofDN:
Attributes of:
[12/2/04 12:37:05.435] ZENInv - Status Reporting:
getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "- l,
"Tw s, "Ty T, "T j, "T- l, "
s, " T, "6 j, "9Y l, "~; s}
[12/2/04 12:37:05.441] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:05.445] ZENInv - Status Reporting: Adding record 0 for
DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:37:05.568] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:05.670] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:05.707] ZENInv - Status Reporting: getAttributesofDN:
Attributes of:
[12/2/04 12:37:05.738] ZENInv - Status Reporting:
getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "Tw s,
"Ty T, "T j, "T- l, " s, "
T, "6 j, "9Y l, "~; s, "s? T}
[12/2/04 12:37:05.747] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:05.748] ZENInv - Status Reporting: Number of modified records
are: 0 for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:37:05.767] ZenInv - Upgrade Service: [EnumModifier]
modifyEnumTables, Running EnumModifier in integrated mode with
upgradeService
[12/2/04 12:37:05.767] ZenInv - Upgrade Service: [EnumModifier]
modifyEnumTables, dbDriver:com.sybase.jdbc.SybDriver
[12/2/04 12:37:05.767] ZenInv - Upgrade Service: [EnumModifier]
modifyEnumTables,
dbURL:jdbc:sybase:Tds:11.32.25.98:2638?ServiceName =mgmtdb&JCONNECT_VERSION=4
[12/2/04 12:37:05.768] ZenInv - Upgrade Service: [EnumModifier]
modifyEnumTables, dbType:0
[12/2/04 12:37:05.768] ZenInv - Upgrade Service: [EnumModifier]
modifyEnumTables, dbUser:MW_DBA
[12/2/04 12:37:05.768] ZenInv - Upgrade Service: [EnumModifier]
modifyEnumTables, dbPasswd:novell
[12/2/04 12:37:06.844] Service Manager: start(ServiceDataAccessor, String[])
not found in
'com.novell.zenworks.desktop.inventory.selector.Se lectorServiceInit'
[12/2/04 12:37:07.046] ZENInv - Selector: Selector Services Started
Successfully
[12/2/04 12:37:07.046] Service Manager: start(ServiceDataAccessor, String[])
not found in
'com.novell.zenworks.desktop.inventory.storer.Stor erServiceInit'
[12/2/04 12:37:07.062] ZENInv - Selector: Selector Code Profiling disabled
[12/2/04 12:37:07.067] ZENInv - Storer: Start with
'ServiceStatusChangeListener' invoked
[12/2/04 12:37:17.010] ZENInv - Selector: Getting ServerConfig HashTable
[12/2/04 12:37:17.010] ZENInv - Selector: Getting InvServiceObj from
HashTable
[12/2/04 12:37:17.011] ZENInv - Selector: Getting NDSTree from ServiceObject
[12/2/04 12:37:17.013] ZENInv - Selector: NDSTree=null
[12/2/04 12:37:17.014] ZENInv - Selector: Getting InventoryServiceDN from
ServiceObject
[12/2/04 12:37:17.014] ZENInv - Selector:
InventoryServiceDN=CN=SERVER1_ZenInvService.OU=BMO .OU=UK.O=OURTREE
[12/2/04 12:37:17.014] ZENInv - Selector: Getting ScanDir from ServiceObject
[12/2/04 12:37:17.015] ZENInv - Selector: ScanDir=DATA:\Zenworks\ScanDir
[12/2/04 12:37:17.021] ZENInv - Selector: NEW SyncServiceTable Constructor
Invoked
[12/2/04 12:37:17.022] ZENInv - Selector: Creating-Verifying
Serialize-Deserialize Location DATA:\Zenworks\ScanDir\stable\
[12/2/04 12:37:17.022] ZENInv - Selector: Checking for
DATA:\Zenworks\ScanDir\stable\
[12/2/04 12:37:17.023] ZENInv - Selector: synchTableDir exists. Check wether
this is a directory or File
[12/2/04 12:37:17.024] ZENInv - Storer: Storer from dblocation user: MW_DBA
driver: com.sybase.jdbc.SybDriver protocol: jdbc:sybase: subname: Tds: port:
2638
[12/2/04 12:37:17.024] ZENInv - Selector: Directory
ExistsDATA:\Zenworks\ScanDir\stable\
[12/2/04 12:37:17.024] ZENInv - Selector: Directory Existence
ConfirmedDATA:\Zenworks\ScanDir\stable\
[12/2/04 12:37:17.025] ZENInv - Selector: Serialize-Deserialize File
DATA:\Zenworks\ScanDir\stable\STABLE.SER
[12/2/04 12:37:17.025] ZENInv - Selector: Initializing SyncServiceTable
[12/2/04 12:37:17.025] ZENInv - Selector: SynchTable Does not Exist
[12/2/04 12:37:17.026] ZENInv - Selector: Attempting to Load SynchTable From
Serialized File
[12/2/04 12:37:17.026] ZENInv - Selector: DeSerializing hashTable
FromDATA:\Zenworks\ScanDir\stable\STABLE.SER
[12/2/04 12:37:17.029] ZENInv - Storer: url[From DBObject]:
jdbc:sybase:Tds:11.32.25.98:2638?ServiceName=mgmtd b&JCONNECT_VERSION=4
[12/2/04 12:37:17.035] ZENInv - Storer: wsdn:
CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREEi = 0
[12/2/04 12:37:17.037] ZENInv - Selector: DeSerializing SyncService
HashTable
[12/2/04 12:37:17.041] ZENInv - Selector: SynchTable Loaded Sucessfully From
Serialized File
[12/2/04 12:37:17.042] ZENInv - Selector: Getting hasDatabase status from
ServiceObject
[12/2/04 12:37:17.042] ZENInv - Selector: hasDatabase is true from
ServiceObject
[12/2/04 12:37:17.042] ZENInv - Selector: Getting isStandAlone status from
ServiceObject
[12/2/04 12:37:17.043] ZENInv - Selector: isStandAlone is true from
ServiceObject
[12/2/04 12:37:17.043] ZENInv - Selector: ConvDir
DATA:\Zenworks\ScanDir\conv\
[12/2/04 12:37:17.044] ZENInv - Selector: ConvDir exists. Check wether this
is a directory or File
[12/2/04 12:37:17.044] ZENInv - Selector: DATA:\Zenworks\ScanDir
[12/2/04 12:37:17.045] ZENInv - Selector: DATA:\Zenworks\ScanDir\DbDir
[12/2/04 12:37:17.045] ZENInv - Selector:
[12/2/04 12:37:17.048] ZENInv - Selector: Getting SELECTOR_STORER Synch
Object
[12/2/04 12:37:17.049] ZENInv - Selector: Getting SELECTOR_COLLECTOR Synch
Object
[12/2/04 12:37:17.049] ZENInv - Selector: Getting SELECTOR_CONVERTER Synch
Object
[12/2/04 12:37:17.049] ZENInv - Selector: Getting CONVERTER_SELECTOR Synch
Object
[12/2/04 12:37:17.050] ZENInv - Selector: Getting SYNCHSERVICE_SELECTOR
Synch Object
[12/2/04 12:37:17.061] ZENInv - Storer: Loading Storer test properties file
[12/2/04 12:37:17.080] Service Manager: start(ServiceDataAccessor, String[])
not found in
'com.novell.zenworks.common.inventory.scancollecto r.ScanCollector'
[12/2/04 12:37:17.338] ZENInv - IFS Server: zenInvScanCollector:
FileServiceController: Startup Properties: {chunksize=4096,
lockfactory=com.novell.zenworks.common.inventory.i fs.utils.MemoryFileLockFac
tory, lockseed=ScanSelectorLock, transfers=100,
rootdirectory=DATA:\Zenworks\ScanDir, timeout=60000,
servicename=zenInvScanCollector, portnumber=0}
[12/2/04 12:37:17.499] Service Manager: start(ServiceDataAccessor, String[])
not found in
'com.novell.zenworks.desktop.inventory.InvSyncServ ice.ManagableSyncService'
[12/2/04 12:37:17.579] ZENInv - Inventory Sync Service: SyncService thread
started
[12/2/04 12:37:17.579] ZENInv - Inventory Sync Service: NEW SyncServiceTable
Constructor Invoked
[12/2/04 12:37:17.580] ZENInv - Inventory Sync Service: Creating-Verifying
Serialize-Deserialize Location DATA:\Zenworks\ScanDir\stable\
[12/2/04 12:37:17.580] ZENInv - Inventory Sync Service: Checking for
DATA:\Zenworks\ScanDir\stable\
[12/2/04 12:37:17.585] ZENInv - Inventory Sync Service: synchTableDir
exists. Check wether this is a directory or File
[12/2/04 12:37:17.585] ZENInv - Inventory Sync Service: Directory
ExistsDATA:\Zenworks\ScanDir\stable\
[12/2/04 12:37:17.586] ZENInv - Inventory Sync Service: Directory Existence
ConfirmedDATA:\Zenworks\ScanDir\stable\
[12/2/04 12:37:17.586] ZENInv - Inventory Sync Service:
Serialize-Deserialize File DATA:\Zenworks\ScanDir\stable\STABLE.SER
[12/2/04 12:37:17.587] ZENInv - Inventory Sync Service: Initializing
SyncServiceTable
[12/2/04 12:37:17.587] ZENInv - Inventory Sync Service: Will Use the
existing SyncServiceTable
[12/2/04 12:37:18.445] ZENInv - Status Reporting: Messages are written into
XML file for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:37:18.484] ZENInv - Status Reporting: getAttributesofDN:
Attributes of:
[12/2/04 12:37:18.520] ZENInv - Status Reporting:
getAttributes:Attributes={zeninvserverlog=zeninvServerLog: "Tw s, "Ty T, "T j, "T- l, " s, " T, "6 j, "9Y l, "~; s, "s? T}
[12/2/04 12:37:18.525] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:18.526] ZENInv - Status Reporting: Number of records to add
are: 1 for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:37:18.564] ZENInv - Status Reporting: getAttributesofDN:
Attributes of:
[12/2/04 12:37:18.595] ZENInv - Status Reporting:
getAttributes:Attributes={zeninvserverlog=zeninvServerLog: "Tw s, "Ty T, "T j, "T- l, " s, " T, "6 j, "9Y l, "~; s, "s? T}
[12/2/04 12:37:18.600] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:18.601] ZENInv - Status Reporting: Adding record 0 for
DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
[12/2/04 12:37:18.699] ZENInv - Status Reporting: Logout from NDS
returnedtrue
[12/2/04 12:37:18.783] ZENInv - Status Reporting: Logout from NDS
returnedtrue
Migration Log
Starting the AlterProcedure Log at 2004-12-02 11:45:29.731
Unzipping the file migrate.zip
Transforming the Migration SQL files...
...3
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Initial_0 is copied
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_ZenPotsModem_1 is done with 545 0x03010c0a010202
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_ZenOperatingSystem_2 is done with 143 0x03010e02
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_ZenNetworkAdapter_3 is done with 544 0x03010c0b04
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_ZenDiskDrive_4 is done with 543 0x03010c080301
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_WinOperatingSystem_5 is done with 547 0x03010e0201
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\zenworks_videoadapter_6 is done with 53 0x03010c030c0201
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_SystemInfo_7 is done with 228 0x03020404
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_soundadapter_8 is done with 546 0x03010c16
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_Motherboard_9 is done with 221 0x0302040101
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_LogicalDiskDrive_10 is done with 113 0x03010c110402
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_inventoryscanner_11 is done with 181 0x03011402
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_installeddriver_12 is done with 548 0x2701
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_InstalledDirectory_13 is done with 551 0x4a
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_bus_14 is done with 35 0x03010c0301
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_bios_15 is done with 180 0x0301140101
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\ManageWise_CurrentLoginUser_16 is done with 549 0x48
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\ManageWise_LastLoginUser_17 is done with 550 0x49
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\ManageWise_User_18 is done with 205 0x03011a
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\misc_19 is copied
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\sybcustom_20 is copied
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\sybdelprocs_21 is copied
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\sybextended_22 is copied
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\sybrmaudit_23 is copied
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\CIM_Processor_24 is copied
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Zenworks_NetwareOperatingSystem_24 is done with 142 0x03010e01
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\indytoprom_26 is copied
\DATA:\zenworks\Inv\Server\wminv\properties\migrate\sybase\input\Post_30 is copied
Starting Schema Migration
Adding the extra db spaces...
SELECT file_name FROM SYSFILE WHERE dbspace_name LIKE '%CIM9%'
has the results
File_NAME=mgmtdb9.db
dbfile's base directory =
SELECT file_name FROM SYSFILE WHERE dbspace_name LIKE '%CIM10%'
CREATE DBSPACE CIM10 AS 'mgmtdb10.db'
Totaltime taken for Schema Migration : 149
Migration Exception Message->Could not add the dbspace..ASA Error -121:
Permission denied: unable to create the dbspace file "CIM10",
SQLState->42501, ErrorCode->262

I am getting identical errors with NW6.5 SP3, edir 8.7.3.6, upgrading from ZFD4.01 to ZFD6.5. Haven't found any knowledgebase items on it.
Thanks,
Doug Streit
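Both failures end at the same point: ASA error -121 on CREATE DBSPACE means the database engine could not create the file mgmtdb10.db on disk, i.e. a file-system permission or path problem rather than a schema error. As a rough diagnostic sketch (Python purely for illustration; the path below is an assumption - substitute the directory that actually holds mgmtdb.db), you can check whether a directory is writable by the account running the check:

```python
import tempfile

def dir_writable(path):
    """Try to create (and immediately delete) a temp file in `path`."""
    try:
        with tempfile.NamedTemporaryFile(dir=path):
            return True
    except OSError:
        # Covers "permission denied" as well as a missing directory.
        return False

if __name__ == "__main__":
    # Hypothetical location of the inventory database files.
    db_dir = r"\\SERVER1\DATA\ZENworks\Inv\Db"
    print(dir_writable(db_dir))
```

If the account the database engine runs under cannot write to that directory, CREATE DBSPACE CIM10 AS 'mgmtdb10.db' will keep failing no matter how many times the upgrade service is restarted or reinstalled.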
> Extra info: this is a NW6.5 SP2 server with edir 8.7.3.3, Sybase database.
>
> "SLaw" <[email protected]> wrote in message
> news:[email protected]...
> > Hi,
> >
> > Having upgraded from ZFD3.2 SP3 to ZFD4.01 and carefully following the
> > documentation instructions I still get the following error on the
> Inventory
> > Service startup screen:
> >
> > "The Upgrade Service is migrating the schema
> > An error has occurred. The upgrade service is exiting now."
> >
> > The Inv Srv object Server Status log says:
> >
> > "1178: An error occured while performing the DBSchema migration
> > 1177: Unable to complete all the operations, the upgrade service is
> exiting
> > with errors.
> > 106: The Storer succesfully connected to the database
> > 108:the database in not initialized as it is being upgraded"
> >
> > All of these errors on the support site have as a fix either restart the
> > Inventory Service or contact Novell.
> >
> > I've spent some hours now looking for a fix on Knowledgebase and in the
> Zen
> > forums but can't find anything useful. This is the second time I've run
> the
> > upgrade and this problem has occurred. I've reinstalled Zen3.2 from
> scratch,
> > applied SP1 and SP3.
> > I've checked policies and database objects, all appears as it should be.
> >
> >
> > These seem to be the relevant errors from the log files:
> >
> > Migration Log:
> > Migration Exception Message->Could not add the dbspace..ASA Error -121:
> > Permission denied: unable to create the dbspace file "CIM10",
> > SQLState->42501, ErrorCode->262
> >
> > Zenworksinvservice log file:
> > [12/2/04 12:37:04.691] ZenInv - Upgrade Service: migrateDbToP1(true)
> failed
> > in AlterProcedure
> > [12/2/04 12:37:05.286] ZenInv - Upgrade Service: Upgrade to ZfD4.0 or
> ZFS3.0
> > SP1 failed. Reinstall ZFD4.0 or ZFS3.0 SP1 and start the inventory server
> > again
> >
> > See full logs below.
> >
> > Is there no option but to go straight to Novell?
> >
> > Regards,
> >
> >
> > Steve Law
> >
> > zenworksinvservice-041202123653303.txt
> > *********************************
> > [12/2/04 12:36:53.280] ZENInv - Service Loader: getAttributesofDN:
> > Attributes of:
> > [12/2/04 12:36:53.349] ZENInv - Service Loader:
> > getAttributes:Attributes={zeninvhostserver=zeninvHostServer: Distinguished Name CN=SERVER1.OU=BMO.OU=UK.O=OURTREE, zeninvrole=zeninvRole: Integer 25}
> > [12/2/04 12:36:53.358] ZENInv - Service Loader: Logout from NDS
> returnedtrue
> > [12/2/04 12:36:53.725] ZENInv - Server Config: The severconfig service
> start
> > is pending...
> > [12/2/04 12:36:53.784] ZENInv - Server Config: getAttributesofDN:
> Attributes
> > of:
> > [12/2/04 12:36:53.827] ZENInv - Server Config:
> > getAttributes:Attributes={zeninvscanfilepath=zeninvScanFilePath: Case Ignore String \\SERVER1\DATA\Zenworks\ScanDir, zeninvrole=zeninvRole: Integer 25, zeninvcomponentstatus=zeninvComponentStatus: 4.0 12 0}
> > [12/2/04 12:36:53.832] ZENInv - Server Config: Logout from NDS
> returnedtrue
> > [12/2/04 12:36:53.838] ZENInv - Server Config: Getting Cascade INit time
> > [12/2/04 12:36:54.844] ZENInv - Server Config: Database Location Policy
> DN
> > is CN=Zen32 Server Package:General:ZENworks
> > Database.OU=Policies.OU=BMO.OU=UK.O=OURTREE
> > [12/2/04 12:36:54.887] ZENInv - Server Config: getAttributesofDN:
> Attributes
> > of:
> > [12/2/04 12:36:54.918] ZENInv - Server Config:
> > getAttributes:Attributes={zenlocdatabaseobject=zenlocDatabaseObject: Distinguished Name CN=SERVER1_invDatabase.OU=BMO.OU=UK.O=OURTREE}
> > [12/2/04 12:36:54.923] ZENInv - Server Config: Logout from NDS
> returnedtrue
> > [12/2/04 12:36:54.967] ZENInv - Server Config: getAttributesofDN:
> Attributes
> > of:
> > [12/2/04 12:36:55.001] ZENInv - Server Config:
> > getAttributes:Attributes={zendbpassword=zendbPassword: novell, zendbjdbcprotocol=zendbJDBCProtocol: jdbc:, zendbdatabasetype=zendbDatabaseType: Integer 0, zendbjdbcsidservice=zendbJDBCSIDService: 11.32.25.98, zendburl=zendbURL: Case Ignore String jdbc:sybase:Tds:11.32.25.98:2638?ServiceName=mgmtdb&JCONNECT_VERSION=4, zendbjdbcsubname=zendbJDBCSubName: Tds:, zendbjdbcdriver=zendbJDBCDriver: com.sybase.jdbc.SybDriver, host server=Host Server: Distinguished Name CN=SERVER1.OU=BMO.OU=UK.O=OURTREE, network address=Network Address: 13;20;, zendbuser=zendbUser: MW_DBA, zendbjdbcsubprotocol=zendbJDBCSubProtocol: sybase:, zendbjdbcport=zendbJDBCPort: 2638}
> > [12/2/04 12:36:55.013] ZENInv - Server Config: Logout from NDS
> returnedtrue
> > [12/2/04 12:36:56.297] ZENInv - Server Config: The severconfig service is
> > started...
> > [12/2/04 12:36:56.320] ZENInv - Sync Scheduler: The scheduler service
> start
> > is pending...
> > [12/2/04 12:36:56.382] ZENInv - Sync Scheduler: getAttributesofDN:
> > Attributes of:
> > [12/2/04 12:36:56.459] ZENInv - Sync Scheduler:
> > getAttributes:Attributes={zeninvsyncrepeatinterval=zeninvSyncRepeatInterval: }
> > [12/2/04 12:36:56.463] ZENInv - Sync Scheduler: Logout from NDS
> returnedtrue
> > [12/2/04 12:36:56.585] ZENInv - Sync Scheduler: This is the schedule
> > fetchedRun Daily: Monday, Tuesday, Wednesday, Thursday, Friday From
> 00:00:00
> > To 22:55:00 (Randomly dispatch during time period)
> > [12/2/04 12:36:56.597] ZENInv - Sync Scheduler: fetchSchedule returned:
> true
> > [12/2/04 12:36:56.597] ZENInv - Sync Scheduler: Schedule Fired
> > [12/2/04 12:36:56.597] ZENInv - Sync Scheduler: The scheduler service is
> > started...
> > [12/2/04 12:36:56.721] ZenInv - Upgrade Service: getAttributesofDN:
> > Attributes of:
> > [12/2/04 12:36:56.752] ZenInv - Upgrade Service:
> > getAttributes:Attributes={zeninvcomponentstatus=zeninvComponentStatus: 4.0 12 0}
> > [12/2/04 12:36:56.756] ZenInv - Upgrade Service: Logout from NDS
> > returnedtrue
> > [12/2/04 12:36:56.758] ZenInv - Upgrade Service: Info : Read
> > zenInvComponentStatus = 4.0 12 0
> > [12/2/04 12:36:56.758] ZenInv - Upgrade Service: Inventory Server
> > zenInvComponentStatus = 4.0 12 0
> > [12/2/04 12:36:56.789] Service Manager: start(ServiceDataAccessor, String[]) not found in 'com.novell.zenworks.desktop.inventory.strconverter.STRConverterServiceInit'
> > [12/2/04 12:36:56.815] ZENInv - STRConverter: Start with
> > 'ServiceStatusChangeListener' invoked
> > [12/2/04 12:37:04.691] ZenInv - Upgrade Service: migrateDbToP1(true)
> failed
> > in AlterProcedure
> > [12/2/04 12:37:04.807] ZENInv - Status Reporting: Messages are written
> into
> > XML file for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
> > [12/2/04 12:37:04.845] ZENInv - Status Reporting: getAttributesofDN:
> > Attributes of:
> > [12/2/04 12:37:04.881] ZENInv - Status Reporting:
> > getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "-�O
> j,
> > "-�� l, "Tw� �s, "Ty� �T, "T�� j, "T�-
> > l, "�� �s, "� �T, "�6� j, "�9Y l}
> > [12/2/04 12:37:04.898] ZENInv - Status Reporting: Logout from NDS
> > returnedtrue
> > [12/2/04 12:37:04.912] ZENInv - Status Reporting: Number of records to add
> > are: 1 for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
> > [12/2/04 12:37:04.951] ZENInv - Status Reporting: getAttributesofDN:
> > Attributes of:
> > [12/2/04 12:37:04.984] ZENInv - Status Reporting:
> > getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "-�O
> j,
> > "-�� l, "Tw� �s, "Ty� �T, "T�� j, "T�-
> > l, "�� �s, "� �T, "�6� j, "�9Y l}
> > [12/2/04 12:37:04.990] ZENInv - Status Reporting: Logout from NDS
> > returnedtrue
> > [12/2/04 12:37:04.991] ZENInv - Status Reporting: Adding record 0 for
> > DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
> > [12/2/04 12:37:05.113] ZENInv - Status Reporting: Logout from NDS
> > returnedtrue
> > [12/2/04 12:37:05.209] ZENInv - Status Reporting: Logout from NDS
> > returnedtrue
> > [12/2/04 12:37:05.246] ZENInv - Status Reporting: getAttributesofDN:
> > Attributes of:
> > [12/2/04 12:37:05.277] ZENInv - Status Reporting:
> > getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "-��
> l,
> > "Tw� �s, "Ty� �T, "T�� j, "T�- l, "��
> > �s, "� �T, "�6� j, "�9Y l, "�~; �s}
> > [12/2/04 12:37:05.284] ZENInv - Status Reporting: Logout from NDS
> > returnedtrue
> > [12/2/04 12:37:05.285] ZENInv - Status Reporting: Number of modified
> records
> > are: 0 for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
> > [12/2/04 12:37:05.286] ZenInv - Upgrade Service: Upgrade to ZfD4.0 or
> ZFS3.0
> > SP1 failed. Reinstall ZFD4.0 or ZFS3.0 SP1 and start the inventory server
> > again
> > [12/2/04 12:37:05.288] ZENInv - Status Reporting: Messages are written
> into
> > XML file for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
> > [12/2/04 12:37:05.326] ZENInv - Status Reporting: getAttributesofDN:
> > Attributes of:
> > [12/2/04 12:37:05.358] ZENInv - Status Reporting:
> > getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "-��
> l,
> > "Tw� �s, "Ty� �T, "T�� j, "T�- l, "��
> > �s, "� �T, "�6� j, "�9Y l, "�~; �s}
> > [12/2/04 12:37:05.364] ZENInv - Status Reporting: Logout from NDS
> > returnedtrue
> > [12/2/04 12:37:05.365] ZENInv - Status Reporting: Number of records to add
> > are: 1 for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
> > [12/2/04 12:37:05.403] ZENInv - Status Reporting: getAttributesofDN:
> > Attributes of:
> > [12/2/04 12:37:05.435] ZENInv - Status Reporting:
> > getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "-��
> l,
> > "Tw� �s, "Ty� �T, "T�� j, "T�- l, "��
> > �s, "� �T, "�6� j, "�9Y l, "�~; �s}
> > [12/2/04 12:37:05.441] ZENInv - Status Reporting: Logout from NDS
> > returnedtrue
> > [12/2/04 12:37:05.445] ZENInv - Status Reporting: Adding record 0 for
> > DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
> > [12/2/04 12:37:05.568] ZENInv - Status Reporting: Logout from NDS
> > returnedtrue
> > [12/2/04 12:37:05.670] ZENInv - Status Reporting: Logout from NDS
> > returnedtrue
> > [12/2/04 12:37:05.707] ZENInv - Status Reporting: getAttributesofDN:
> > Attributes of:
> > [12/2/04 12:37:05.738] ZENInv - Status Reporting:
> > getAttributes:Attributes={zeninvserverlog=zeninvSe rverLog: "Tw�
> �s,
> > "Ty� �T, "T�� j, "T�- l, "�� �s, "�
> > �T, "�6� j, "�9Y l, "�~; �s, "�s? �T}
> > [12/2/04 12:37:05.747] ZENInv - Status Reporting: Logout from NDS
> > returnedtrue
> > [12/2/04 12:37:05.748] ZENInv - Status Reporting: Number of modified
> records
> > are: 0 for DN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
> > [12/2/04 12:37:05.767] ZenInv - Upgrade Service: [EnumModifier]
> > modifyEnumTables, Running EnumModifier in integrated mode with
> > upgradeService
> > [12/2/04 12:37:05.767] ZenInv - Upgrade Service: [EnumModifier]
> > modifyEnumTables, dbDriver:com.sybase.jdbc.SybDriver
> > [12/2/04 12:37:05.767] ZenInv - Upgrade Service: [EnumModifier]
> > modifyEnumTables,
> >
> dbURL:jdbc:sybase:Tds:11.32.25.98:2638?ServiceName =mgmtdb&JCONNECT_VERSION=4
> > [12/2/04 12:37:05.768] ZenInv - Upgrade Service: [EnumModifier]
> > modifyEnumTables, dbType:0
> > [12/2/04 12:37:05.768] ZenInv - Upgrade Service: [EnumModifier]
> > modifyEnumTables, dbUser:MW_DBA
> > [12/2/04 12:37:05.768] ZenInv - Upgrade Service: [EnumModifier]
> > modifyEnumTables, dbPasswd:novell
> > [12/2/04 12:37:06.844] Service Manager: start(ServiceDataAccessor, String[]) not found in 'com.novell.zenworks.desktop.inventory.selector.SelectorServiceInit'
> > [12/2/04 12:37:07.046] ZENInv - Selector: Selector Services Started
> > Successfully
> > [12/2/04 12:37:07.046] Service Manager: start(ServiceDataAccessor, String[]) not found in 'com.novell.zenworks.desktop.inventory.storer.StorerServiceInit'
> > [12/2/04 12:37:07.062] ZENInv - Selector: Selector Code Profiling disabled
> > [12/2/04 12:37:07.067] ZENInv - Storer: Start with
> > 'ServiceStatusChangeListener' invoked
> > [12/2/04 12:37:17.010] ZENInv - Selector: Getting ServerConfig HashTable
> > [12/2/04 12:37:17.010] ZENInv - Selector: Getting InvServiceObj from
> > HashTable
> > [12/2/04 12:37:17.011] ZENInv - Selector: Getting NDSTree from
> ServiceObject
> > [12/2/04 12:37:17.013] ZENInv - Selector: NDSTree=null
> > [12/2/04 12:37:17.014] ZENInv - Selector: Getting InventoryServiceDN from
> > ServiceObject
> > [12/2/04 12:37:17.014] ZENInv - Selector:
> > InventoryServiceDN=CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREE
> > [12/2/04 12:37:17.014] ZENInv - Selector: Getting ScanDir from
> ServiceObject
> > [12/2/04 12:37:17.015] ZENInv - Selector: ScanDir=DATA:\Zenworks\ScanDir
> > [12/2/04 12:37:17.021] ZENInv - Selector: NEW SyncServiceTable Constructor
> > Invoked
> > [12/2/04 12:37:17.022] ZENInv - Selector: Creating-Verifying
> > Serialize-Deserialize Location DATA:\Zenworks\ScanDir\stable\
> > [12/2/04 12:37:17.022] ZENInv - Selector: Checking for
> > DATA:\Zenworks\ScanDir\stable\
> > [12/2/04 12:37:17.023] ZENInv - Selector: synchTableDir exists. Check
> wether
> > this is a directory or File
> > [12/2/04 12:37:17.024] ZENInv - Storer: Storer from dblocation user:
> MW_DBA
> > driver: com.sybase.jdbc.SybDriver protocol: jdbc:sybase: subname: Tds:
> port:
> > 2638
> > [12/2/04 12:37:17.024] ZENInv - Selector: Directory
> > ExistsDATA:\Zenworks\ScanDir\stable\
> > [12/2/04 12:37:17.024] ZENInv - Selector: Directory Existence
> > ConfirmedDATA:\Zenworks\ScanDir\stable\
> > [12/2/04 12:37:17.025] ZENInv - Selector: Serialize-Deserialize File
> > DATA:\Zenworks\ScanDir\stable\STABLE.SER
> > [12/2/04 12:37:17.025] ZENInv - Selector: Initializing SyncServiceTable
> > [12/2/04 12:37:17.025] ZENInv - Selector: SynchTable Does not Exist
> > [12/2/04 12:37:17.026] ZENInv - Selector: Attempting to Load SynchTable
> From
> > Serialized File
> > [12/2/04 12:37:17.026] ZENInv - Selector: DeSerializing hashTable
> > FromDATA:\Zenworks\ScanDir\stable\STABLE.SER
> > [12/2/04 12:37:17.029] ZENInv - Storer: url[From DBObject]:
> > jdbc:sybase:Tds:11.32.25.98:2638?ServiceName=mgmtd b&JCONNECT_VERSION=4
> > [12/2/04 12:37:17.035] ZENInv - Storer: wsdn:
> > CN=SERVER1_ZenInvService.OU=BMO.OU=UK.O=OURTREEi = 0
> > [12/2/04 12:37:17.037] ZENInv - Selector: DeSerializing SyncService
> > HashTable
> > [12/2/04 12:37:17.041] ZENInv - Selector: SynchTable Loaded Sucessfully
> From
> > Serialized File
> > [12/2/04 12:37:17.042] ZENInv - Selector: Getting hasDatabase status from
> > ServiceObject
> > [12/2/04 12:37:17.042] ZENInv - Selector: hasDatabase is true from
> > ServiceObject
> > [12/2/04 12:37:17.042] ZENInv - Selector: Getting isStandAlone status from
> > ServiceObject
> > [12/2/04 12:37:17.043] ZENInv - Selector: isStandAlone is true from
> > ServiceObject
> > [12/2/04 12:37:17.043] ZENInv - Selector: ConvDir
> > DATA:\Zenworks\ScanDir\conv\
> > [12/2/04 12:37:17.044] ZENInv - Selector: ConvDir exists. Check wether
> this
> > is a directory or File
> > [12/2/04 12:37:17.044] ZENInv - Selector: DATA:\Zenworks\ScanDir
> > [12/2/04 12:37:17.045] ZENInv - Selector: DATA:\Zenworks\ScanDir\DbDir
> > [12/2/04 12:37:17.045] ZENInv - Selector:
> > [12/2/04 12:37:17.048] ZENInv - Selector: Getting SELECTOR_STORER Synch
> > Object
> > [12/2/04 12:37:17.049] ZENInv - Selector: Getting SELECTOR_COLLECTOR Synch
> > Object
> -
0PA_C01 still uses the old flow - why, and what is the recommended practice?
Hi Experts,
We are at the juncture of creating a new flow for 0PA_C01 (the cube and the objects below it).
Even with BW 735 content I still see the old data flow, i.e. InfoSources, transfer rules and update rules. Has anyone implemented the 7.x flow, or migrated from 3.x to 7.x? If so, can you please explain the procedure you followed? (I am not asking about the BI Content installation procedure, but rather which approach you took: install the 7.x version directly, or migrate from 3.x.)
It seems we have no option other than to install the 3.x flow from the latest BW 735 content. If that is the case, did someone try to convert the flow entirely from 3.x to 7.x, i.e. eliminating InfoSources from the picture and replacing them with transformations directly from the DataSources?
Appreciate your time.
0HR_PA_0 - 0HR_PA_1 - 0PA_C01 3.x vs 7.x version - best practice
Edited by: curious maven on Feb 20, 2012 1:46 PM

We installed the business content as-is, i.e. the 3.x flow with InfoSources, transfer rules and update rules. Make sure that the start routine and all other routines in the transfer rules/update rules are installed properly.
(The standard routines read master data of 0EMPLOYEE to populate fields like gender, nationality, language, age in years, country key, county code, region (state, province, county), postal code, etc.)
With this 3.x data model in place, do the data validation. Once you are okay with the data, start migrating the model to 7.x.
We have the following flow:
DataSource (migrated version) --> Transformation --> InfoSource --> Transformation (start routine, other routines and master-data reads) --> Cube
Since these two DataSources do not support delta, the data loads should be a daily delete and full load. -
Hi all,
I need information on data migration and the possible methods of LSMW.
Thanks,
Swapna

Hi,
You can do a search on the topics "Data Migration" and "LSMW" in this forum for valuable information.
<b>Please do not post your mail ID - let's all follow the forum rules.</b>
Data Migration Life Cycle
Overview : Data Migration Life Cycle
Data Migration
This document aims to outline the typical processes involved in a data migration.
Data migration is the moving and transforming of data from legacy to target database systems. This includes one-to-one and one-to-many mapping, and the movement of static and transactional data. Migration also covers the physical extraction and transmission of data between legacy and target hardware platforms.
ISO 9001 / TickIT accredited
The fundamental aims of certification are quality achievement and improvement and the delivery of customer satisfaction.
The ISO and TickIT Standards are adhered to throughout all stages of the migration process.
The life cycle comprises the following stages:
- Customer Requirements
- Dependencies
- Analysis
- Iterations
- Data Cleanse
- Post Implementation
- Proposal
- Project Management
- Development
- Quality Assurance
- Implementation
Customer Requirements
The first stage is the contact from the customer asking us to tender for a data migration project. The invitation to tender will typically include the scope / requirements and business rules:
- Legacy and Target - Databases / Hardware / Software
- Timeframes - Start and Finish
- Milestones
- Location
- Data Volumes
Dependencies
Environmental Dependencies
- Connectivity - remote or on-site
- Development and Testing Infrastructure - hardware, software, databases, applications and desktop configuration
Support Dependencies
- Training (legacy & target applications) - particularly for an in-house test team
- Business Analysts - provide expert knowledge on both legacy and target systems
- Operations - Hardware / Software / Database Analysts - facilitate system housekeeping when necessary
- Business Contacts
- User Acceptance Testers - chosen by the business
- Business Support for data cleanse
Data Dependencies
- Translation Tables - translate legacy parameters to target parameters
- Static Data / Parameters / Seed Data (target parameters)
- Business Rules - migration selection criteria (e.g. number of months of history)
- Entity Relationship Diagrams / Transfer Dataset / Schemas (legacy & target)
- Sign-Off / User Acceptance criteria - within agreed tolerance limits
- Data Dictionary
Analysis
Gap Analysis
Identifying where differences in the functionalities of the target system and legacy system mean that data may be left behind or alternatively generating default data for the new system where nothing comparable exists on legacy.
Liaison with the business is vital in this phase, as mission-critical data cannot be allowed to be left behind; it is usual to consult the relevant business process leader or Subject Matter Expert (SME). Often this process ends up as a compromise between:
 Pulling the necessary data out of the legacy system to meet the new systems functionality
 Pushing certain data into the new system from legacy to enable certain ad hoc or custom in-house processes to continue.
Data mapping
This is the process of mapping data from the legacy to target database schemas taking into account any reformatting needed. This would normally include the derivation of translation tables used to transform parametric data. It may be the case at this point that the seed data, or static data, for the new system needs generating and here again tight integration and consultation with the business is a must.
Translation Tables
Mapping Legacy Parameters to Target Parameters
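A translation table can be sketched as a simple lookup, shown here with hypothetical legacy status codes (the actual parameters would come from the mapping exercise above):

```python
# Minimal sketch of a parameter translation table: hypothetical legacy
# status codes are mapped to their target-system equivalents.
TRANSLATION = {
    "A": "ACTIVE",
    "S": "SUSPENDED",
    "C": "CLOSED",
}

def translate(legacy_code):
    """Return the target parameter for a legacy code.

    Unmapped codes are returned as None so they surface as exceptions
    during reconciliation rather than loading silently into the target.
    """
    return TRANSLATION.get(legacy_code)
```

Returning None for unmapped codes (rather than passing the legacy value through) is deliberate: it forces gaps in the translation table to be raised as data cleanse or mapping issues.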
Specifications
These designs are produced to enable the developer to create the Extract, Transform and Load (ETL) modules. The output from the gap analysis and data mapping is used to drive the design process. Any constraints imposed by platforms, operating systems, programming languages, timescales etc. should be referenced at this stage, as should any dependencies that this module will have on other such modules in the migration as a whole; failure to do this may result in flawed specifications.
There are generally two forms of migration specification: Functional (e.g. Premise migration strategy) and Detailed Design (e.g. Premise data mapping document).
Built into the migration process at the specification level are steps to reconcile the migrated data at predetermined points during the migration. These checks verify that no data has been lost or gained during each step of an iteration and enable any anomalies to be spotted early and their cause ascertained with minimal loss of time.
Usually written independently from the migration, the specifications for the reconciliation programs used to validate the end-to-end migration process are designed once the target data has been mapped and is more or less static. These routines count like-for-like entities on the legacy system and target system and ensure that the correct volumes of data from legacy have migrated successfully to the target and thus build business confidence.
Iterations
An iteration is an execution of the migration process, which may or may not include new cuts of legacy data.
These facilitate:
 Collation of migration process timings (extraction, transmission, transformation and load).
 The refinement of the migration code i.e. increase data volume and decrease exceptions through:
 Continual identification of data cleanse issues
 Confirmation of parameter settings and parameter translations
 Identification of any migration merge issues
 Reconciliation
From our experience the majority of the data will conform to the migration rules and as such take a minimal effort to migrate ("80/20 rule"). The remaining data, however, is often highly complex with many anomalies and deviations and so will take up the majority of the development time.
Data Cuts
 Extracts of data taken from the legacy and target systems. This can be a complex task where the migration is from multiple legacy systems and it is important that the data is synchronised across all systems at the time the cuts are taken (e.g. end of day processes complete).
 Subsets / selective cuts - Depending upon business rules and migration strategy the extracted data may need to be split before transfer.
Freeze
Prior to any iteration, Parameters, Translation Tables and Code should be frozen to provide a stable platform for the iteration.
Data Cleanse
This activity is required to ensure that legacy system data conforms to the rules of data migration. The activities include manual or automatic updates to legacy data. This is an ongoing activity, as while the legacy systems are active there is the potential to reintroduce data cleanse issues.
Identified by
Data Mapping
Eyeballing
Reconciliation
File Integrities
Common Areas
 Address Formats
 Titles (e.g. mrs, Mrs, MRS, first name)
 Invalid characters
 Duplicate Data
 Free Format to parameter field
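The titles example above lends itself to a small automatic cleanse routine; this is a sketch with an assumed canonical set, not a definitive rule list:

```python
def cleanse_title(raw):
    """Normalise free-format titles (mrs / Mrs / MRS / 'Mrs.') to a single
    target parameter value; anything unrecognised returns None and is
    flagged for manual review rather than guessed at."""
    canonical = {"MR": "Mr", "MRS": "Mrs", "MS": "Ms", "DR": "Dr"}
    key = raw.strip().rstrip(".").upper()
    return canonical.get(key)
```

Values that cannot be normalised automatically (e.g. a first name entered in the title field) are exactly the cases that go back to the business cleansing team.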
Cleansing Strategy
 Legacy - Pre Migration
 During migration (not advised as this makes reconciliation very difficult)
 Target - Post Migration (either manual or via data fix)
 Ad Hoc Reporting - Ongoing
Post Implementation
Support
For an agreed period after implementation certain key members of the migration team will be available to the business to support them in the first stages of using the new system. Typically this will involve analysis of any irregularities that may have arisen through dirty data or otherwise and where necessary writing data fixes for them.
Post Implementation fixes
Post Implementation Data Fixes are programs that are executed post migration to fix data that was either migrated in an 'unclean' state or migrated with known errors. These will typically take the form of SQL scripts.
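A post-implementation fix script typically targets only rows in a known-bad state. The sketch below uses an in-memory SQLite database with hypothetical `customer` and `branch_region` tables to illustrate the pattern:

```python
import sqlite3

# Hypothetical scenario: customers migrated with a blank region are
# corrected from a lookup of their branch code.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER, branch TEXT, region TEXT);
    CREATE TABLE branch_region (branch TEXT, region TEXT);
    INSERT INTO customer VALUES (1, 'B01', ''), (2, 'B02', 'EAST');
    INSERT INTO branch_region VALUES ('B01', 'WEST');
""")

# The fix itself: update only rows in the known-bad state, leaving
# correctly migrated rows untouched.
conn.execute("""
    UPDATE customer
       SET region = (SELECT region FROM branch_region
                      WHERE branch_region.branch = customer.branch)
     WHERE region = ''
""")
fixed = conn.execute("SELECT region FROM customer WHERE id = 1").fetchone()[0]
```

Restricting the `WHERE` clause to the bad state makes the script safe to re-run and easy to reconcile: the row count it touches should equal the known error count.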
Proposal
This is a response to the invitation to tender, which comprises the following:
Migration Strategy
 Migration development models are based on an iterative approach.
 Multiple Legacy / Targets - any migration may transform data from one or more legacy databases to one or more targets
 Scope - Redwood definition / understanding of customer requirements, inclusions and exclusions
The data may be migrated in several ways, depending on data volumes and timescales:
 All at once (big bang)
 In logical blocks (chunking, e.g. by franchise)
 Pilot - A pre-test or trial run for the purpose of proving the migration process, live applications and business processes before implementing on a larger scale.
 Catch Up - To minimise downtime only business critical data is migrated, leaving historical data to be migrated at a later stage.
 Post Migration / Parallel Runs - Both pre and post migration systems remain active and are compared after a period of time to ensure the new systems are working as expected.
Milestones can include:
 Completion of specifications / mappings
 Successful 1st iteration
 Completion of an agreed number of iterations
 Delivery to User Acceptance Testing team
 Successful Dress Rehearsal
 Go Live
Roles and Responsibilities
Data Migration Project Manager/Team Lead is responsible for:
 Redwood Systems Limited project management
 Change Control
 Solution Design
 Quality
 Reporting
 Issues Management
Data Migration Analyst is responsible for:
 Gap Analysis
 Data Analysis & Mapping
 Data migration program specifications
 Extraction software design
 Exception reporting software design
Data Migration Developers are responsible for:
 Migration
 Integrity
 Reconciliation (note these are independently developed)
 Migration Execution and Control
Testers/Quality Assurance team is responsible for:
 Test approach
 Test scripts
 Test cases
 Integrity software design
 Reconciliation software design
Other Roles:
Operational and Database Administration support for source/target systems.
Parameter Definition and Parameter Translation team
Legacy system Business Analysts
Target system Business Analysts
Data Cleansing Team
Testing Team
Project Management
Project Plan
 Milestones and Timescales
 Resources
 Individual Roles and Responsibilities
 Contingency
Communication
It is important to have good communication channels with the project manager and business analysts. Key considerations include agreeing the location, method and format for regular meetings/contact to discuss progress and resources, and to communicate any problems or incidents which may impact the ability of others to perform their duties. These could take the form of weekly conference calls, progress reports or attendance at on-site project meetings.
Change Control
 Scope Change Requests - a stringent change control mechanism needs to be in place to handle any deviations and creeping scope from the original project requirements.
 Version Control - all documents and code shall be version controlled.
Issue Management
 Internal issue management - as a result of Gap Analysis, Data Mapping and iteration output (i.e. reconciliation, file integrity or eyeballing)
 External issue management - Load to Target problems and as a result of User Acceptance Testing
 Mechanism - examples:
 Test Director
 Bugzilla
 Excel
 Access
 TracNotes
Development
Extracts / Loads
 Depending on the migration strategy, extract routines shall be written to derive the legacy data required
 Transfer data from Legacy and/or Target to interim migration environment via FTP, Tape, CSV, D/B object copy, ODBC, API
 Transfer data from interim migration environment to target
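An extract routine can be as simple as writing the selected legacy rows to CSV for transfer to the interim environment; the field names and rows below are hypothetical, and the transfer method (FTP, tape, etc.) is site-specific:

```python
import csv
import io

def extract_to_csv(rows, fieldnames):
    """Extract legacy rows to a CSV payload for transfer to the interim
    migration environment. Writing via DictWriter keeps column order
    fixed and the header row explicit for the load step."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

payload = extract_to_csv(
    [{"id": 1, "name": "Smith"}, {"id": 2, "name": "Jones"}],
    ["id", "name"],
)
```

Keeping the header row in the file gives the load routine (and the reconciliation counts) something to validate against before any data is inserted.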
Migration (transform)
There are a number of potential approaches to a Data Migration:
 Use a middleware tool (e.g. ETI, Powermart). This extracts data from the legacy system, manipulates it and pushes it to the target system. These "4th Generation" approaches are less flexible and often less efficient than bespoke coding, resulting in longer migrations and less control over the data migrated.
 The Data Migration processes are individually coded to be run on a source, an interim or target platform. The data is extracted from the legacy platform to the interim / target platform, where the code is used to manipulate the legacy data into the target system format. The great advantage of this approach is that it can encompass any migration manipulation that may be required in the most efficient, effective way and retain the utmost control. Where there is critical / sensitive data migrated this approach is desirable.
 Use a target system 'File Load Utility', if one exists. This usually requires the use of one of the above processes to populate a pre-defined Target Database. A load and validate facility will then push valid data to the target system.
 Use an application's data conversion/upgrade facility, where available.
Reconciliation
Independent end-to-end comparisons of data content to create the necessary level of business confidence
 Bespoke code is written to extract the required total figures for each area from the legacy, interim and target databases. These figures are totalled and broken down into business areas and segments of interest so that they can be compared to each other. Where differences do occur, investigation determines whether the migration code must be altered or whether there are reasonable mitigating factors.
 Spreadsheets are created to report figures to all levels of management to verify that the process is working and build confidence in the process.
Referential File Integrities
Depending on the constraints of the interim/target database, data may be checked to ascertain and validate its quality. There may be certain categories of dirty data that should be disallowed e.g. duplicate data, null values, data that does not match to a parameter table or an incompatible combination of data in separate fields as proscribed by the analyst. Scripts are written that run automatically after each iteration of the migration. A report is then generated to itemise the non-compatible data.
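A minimal integrity script in this spirit simply lists the rows whose value has no match in a parameter table; the field name and values here are illustrative:

```python
def integrity_report(rows, field, parameter_values):
    """Referential integrity check: report rows whose value in `field`
    does not exist in the target parameter/master data. Intended to be
    run automatically after each iteration of the migration."""
    return [r for r in rows if r[field] not in parameter_values]

# Hypothetical example: 'X' is not a valid region parameter
bad = integrity_report(
    [{"region": "N"}, {"region": "X"}],
    "region",
    {"N", "S"},
)
```

The resulting report of non-compatible rows feeds directly into the data cleanse and issue management activities described earlier.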
Quality Assurance
Reconciliation
 Horizontal reconciliation (number on legacy = number on interim = number on target) and Vertical reconciliation (categorisation counts (i.e. Address counts by region = total addresses) and across systems).
 Figures at all stages (legacy, interim, target) to provide checkpoints.
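Both reconciliation directions can be sketched in a few lines; the category key is hypothetical (e.g. a region code on an address record):

```python
from collections import Counter

def horizontal(legacy, interim, target):
    """Horizontal reconciliation: the total count must match at every
    stage (legacy = interim = target)."""
    return len(legacy) == len(interim) == len(target)

def vertical(rows, key):
    """Vertical reconciliation: categorised counts (e.g. addresses by
    region) must sum back to the overall total."""
    by_category = Counter(r[key] for r in rows)
    return by_category, sum(by_category.values()) == len(rows)
```

In practice each count would be a database query per stage, with the comparison and breakdown reported in the management spreadsheets mentioned above.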
File Integrities
Scripts that identify and report the following for each table:
 Referential Integrity - check values against target master and parameter files.
 Data Constraints
 Duplicate Data
Translation Table Validation
Run after each new cut of data or new version of the translation tables; two stages:
 Verifies that all legacy data accounted for in "From" translation
 Verifies that all "To" translations exist in target parameter data
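The two validation stages above can be sketched as set differences (sample values are hypothetical):

```python
def validate_translations(legacy_values, translation, target_parameters):
    """Two-stage translation table validation:
    1. every distinct legacy value has a 'From' entry in the table;
    2. every 'To' value exists in the target parameter data.
    Returns the two lists of failures (empty lists mean a clean pass)."""
    missing_from = sorted(set(legacy_values) - set(translation))
    missing_to = sorted(set(translation.values()) - set(target_parameters))
    return missing_from, missing_to
```

Running this after each data cut catches both new legacy values that appeared since the table was built and stale "To" values that no longer exist in the target parameters.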
Eyeballing
Comparison of legacy and target applications
 Scenario Testing - legacy-to-target verification that data has been migrated correctly for certain customers chosen by the business, whose circumstances fall into particular categories (e.g. inclusion and exclusion Business Rule categories, data volumes etc.)
 Regression Testing - testing known problem areas
 Spot Testing - a random spot check on migrated data
 Independent Team - the eyeballing is generally carried out by a dedicated testing team rather than the migration team
UAT
This is the customer-based User Acceptance Test of the migrated data, which will form part of the customer sign-off.
Implementation
Freeze
A code and parameter freeze occurs in the run up to the dress rehearsal. Any problems post freeze are run as post freeze fixes.
Dress Rehearsal
Dress rehearsals are intended to mobilise the resources that will be required to support a cutover in the production environment. The primary aim of a dress rehearsal is to identify the risks and issues associated with the implementation plan. It exercises all the steps necessary for a successful 'go live' migration.
Through the execution of a dress rehearsal all the go live checkpoints will be properly managed and executed and if required, the appropriate escalation routes taken.
Go Live window (typical migration)
 Legacy system 'end of business day' closedown
 Legacy system data extractions
 Legacy system data transmissions
 Readiness checks
 Migration Execution
 Reconciliation
 Integrity checking
 Transfer load to Target
 User Acceptance testing
 Reconciliation
 Acceptance and Go Live
===================
LSMW: refer to the links below for useful info (screen shots for the various LSMW methods)
Step-By-Step Guide for LSMW using ALE/IDOC Method (Screen Shots)
http://www.****************/Tutorials/LSMW/IDocMethod/IDocMethod1.htm
Using Bapi in LSMW (Screen Shots)
http://www.****************/Tutorials/LSMW/BAPIinLSMW/BL1.htm
Uploading Material Master data using BAPI method in LSMW (Screen Shots)
http://www.****************/Tutorials/LSMW/MMBAPI/Page1.htm
Step-by-Step Guide for using LSMW to Update Customer Master Records(Screen Shots)
http://www.****************/Tutorials/LSMW/Recording/Recording.htm
Uploading Material master data using recording method of LSMW(Screen Shots)
http://www.****************/Tutorials/LSMW/MMRecording/Page1.htm
Step-by-Step Guide for using LSMW to Update Customer Master Records(Screen Shots) Batch Input method
Uploading Material master data using Direct input method
http://www.****************/Tutorials/LSMW/MMDIM/page1.htm
Steps to copy LSMW from one client to another
http://www.****************/Tutorials/LSMW/CopyLSMW/CL.htm
Modifying BAPI to fit custom requirements in LSMW
http://www.****************/Tutorials/LSMW/BAPIModify/Main.htm
Using Routines and exception handling in LSMW
http://www.****************/Tutorials/LSMW/Routines/Page1.htm
Reward if useful.
Thanks & regards,
Naren -
PGI - Batch deficit qty - Posting period
Dear Mates,
I need the following clarifications. I am migrating the data for January 2009 this month. I had kept the current period as February 2009 and the previous period as January 2009, and I used to get the following error: "Deficit Qty in BA in plant material XXXXX". But when I check the stock as on the PGI date, there is enough stock.
Now I have changed the posting period to current period January 2009 and previous period December 2008, and I am no longer facing the error.
Could anybody please explain the logic behind this, and how the system behaves and checks stock during PGI, such that with my earlier settings I got the error but after changing the posting period I do not?
> But when I check the stock as on the PGI date, there is enough stock
Where have you checked - MB5B? In MMBE you could have seen that the stock is available, but the stock may have been posted in some other period.
thanks
G. Lakshmipathi -
Asynchronous process learnings: Why are callbacks skipped?
Just wanted to log my learnings about asynchronous processes and callbacks.
I was trying to write a .NET client to invoke an asynchronous process and was not getting anywhere. The process was being invoked successfully but the callback wasn't happening. The visual flow in the console kept saying "skipped callback x on partner y". It appears this happens when the process does not recognize that the incoming invocation is conversational - in other words it does not find the WS-Addressing headers or if present does not understand it. You would notice the same behaviour when you invoke an asynchronous process from the console. This is expected behaviour because the console does not expose a callback.
So, if you are trying to invoke an asynchronous process from another platform and see the "skipped callback" message, the first place to check is your WS-Addressing headers. Is it present and specified properly? Is the WS-Addressing namespace correct? On the BPEL 2.0.11 version I was testing, newer WS-Addressing versions would not work. I had to switch back to the older "http://schemas.xmlsoap.org/ws/2003/03/addressing" version. Another strange problem I noticed was that it was expecting some bpel attributes such as rootID, parentID in the MessageID header. Without these it would throw a server fault.
It also helps to write a sample client process in BPEL to invoke the asynchronous process and log the request and callback messages (there are technotes on OTN that show how to do this). This can be compared with the messages produced by your client to debug any problems.
Hope this information is useful for interested people. I'm yet to test the behaviour on the 10g version. Will keep this thread updated when I do that.

The migration utility only processes transforms that are created by the user. If a transform was part of the Data Quality installation, it will not be migrated, because the equivalent transform will be installed as part of the Data Services installation.
Please read the manual "DS Migration Considerations > Data Quality to Data Services Migration > How transforms migrate" for more details.
Regards,
George
Edited by: George Ruan on Nov 22, 2011 9:04 PM