Creating a logical partition
Right now, I have 4 primary partitions on my laptop: C, SYSTEM, RECOVERY and HP-TOOLS.
I want to create an extended partition, so I deleted the HP-TOOLS partition.
The link below is to download the HP-TOOLS partition contents for my computer (for BIOS updates).
Is it possible for me to install this in a logical partition without anything malfunctioning?
http://h10025.www1.hp.com/ewfrf/wc/softwareDownloadIndex?cc=us&lc=en&dlc=en&softwareitem=ob-8...
Hi,
Welcome to the HP Forum!
jasoncruz98 wrote:
Right now, I have 4 primary partitions on my laptop: C, SYSTEM, RECOVERY and HP-TOOLS.
I want to create an extended partition, so I deleted the HP-TOOLS partition.
The link below is to download the HP-TOOLS partition contents for my computer (for BIOS updates).
Is it possible for me to install this in a logical partition without anything malfunctioning?
http://h10025.www1.hp.com/ewfrf/wc/softwareDownloadIndex?cc=us&lc=en&dlc=en&softwareitem=ob-8...
Actually, you might have already removed the F11 functionality. Try it to see if it still accesses the Recovery Manager, and then back out of it. Why didn't you just create a logical partition inside the C: partition? That would have been simpler.
Kind regards,
erico
2015 Microsoft MVP - Windows Experience Consumer
Similar Messages
-
Cannot create another 2 logical partitions on another physical server
When I installed BI 7.0 on the AIX/DB2 9 platform, I could create 2 logical partitions on the main server, yet I couldn't create another 2 logical partitions on the second server. The following is the error message:
INFO 2008-02-21 03:49:03.490
"sapinst_dev.log" [Read only] 20411 lines, 744293 characters
TRACE 2008-02-21 03:51:28.513 [iaxxejsexp.cpp:199]
EJS_Installer::writeTraceToLogBook()
Found Error, error_codes[1] = <db2start dbpartitionnum 5 add dbpartitionnum hostname sapaix08 port 3 without tablespaces
SQL6073N Add Node operation failed. SQLCODE = "-1051".>
TRACE 2008-02-21 03:51:28.513 [iaxxejsexp.cpp:199]
EJS_Installer::writeTraceToLogBook()
During execution of <AddPart.sql>, <2> errors occured.
ERROR 2008-02-21 03:51:28.513 [iaxxinscbk.cpp:282]
abortInstallation
MDB-01999 Error occured, first error is: <SQL6073N Add Node operation failed. SQLCODE = "-1051".>
TRACE 2008-02-21 03:51:28.514 [iaxxejsbas.hpp:388]
handleException<ESAPinstException>()
Converting exception into JS Exception Exception.
ERROR 2008-02-21 03:51:28.515
CJSlibModule::writeError_impl()
MUT-03025 Caught ESAPinstException in Modulecall: ESAPinstException: error text undefined.
TRACE 2008-02-21 03:51:28.515 [iaxxejsbas.hpp:460]
EJS_Base::dispatchFunctionCall()
JS Callback has thrown unknown exception. Rethrowing.
ERROR 2008-02-21 03:51:28.516 [iaxxgenimp.cpp:731]
showDialog()
FCO-00011 The step AddDB6Partitions with step key |NW_DB6_DB_ADDPART|ind|ind|ind|ind|0|0|NW_DB6_AddPartitions|ind|ind|ind|ind|12|0|
AddDB6Partitions was executed with status ERROR .
TRACE 2008-02-21 03:51:28.539 [iaxxgenimp.cpp:719]
showDialog()
The following are my prerequisites for the installation:
1. The user and group IDs and properties are the same as on the primary (server1).
2. The ssh trust relationship has been built; I can ssh to server1 from server2 or to server2 from server1 as the db2sid and sidadm users.
3. I mounted /db2/db2sid, /db2/SID/db2dumps, and /sapmnt/SID/exe on server2 via NFS.
4. The DB2 software is installed at /opt/IBM/db2/V9.1 (the same location as on the primary).
Hi, DB2 experts. Could you give me some suggestions? Thanks!
Hi, Thomas,
Thanks for your help. The DB2 database doesn't use the autostorage method, and the relevant permissions are the same as on server1. I checked the db2diag.log; the following is the detailed information.
"Storage path does not exist or is inaccessible" is the error message. I was wondering which storage path does not exist or is inaccessible.
At the same time, I have logged in on all the /db2 paths as db2sid and run touch to test the permissions; it looks good. I don't know what happened. Could you give me some suggestions? Thanks!
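Since the diagnostic log entries below point at a storage path ("/db2/AB7/sapdata1") that does not exist or is inaccessible on the added partitions, a pre-flight check of the storage paths on each host can be sketched like this (a hypothetical helper; the path list is illustrative and must match your actual sapdata layout):

```python
import os

def check_storage_paths(paths):
    """Return (path, problem) pairs for paths that would trigger
    SQLB_AS_INVALID_STORAGE_PATH ("Storage path does not exist or
    is inaccessible") during db2start ... ADD NODE."""
    problems = []
    for p in paths:
        if not os.path.isdir(p):
            problems.append((p, "does not exist"))
        elif not os.access(p, os.R_OK | os.W_OK | os.X_OK):
            problems.append((p, "is inaccessible"))
    return problems

# Illustrative list; take the real one from the database's storage
# configuration. Run this as the instance owner (db2sid) on EVERY host.
for path, why in check_storage_paths(["/db2/AB7/sapdata1"]):
    print(f"{path} {why}")
```

Running this on server2 as the instance owner would show immediately whether /db2/AB7/sapdata1 is reachable there, which is what the ADD NODE step needs before it can initialize the storage group files.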
2008-02-21-08.10.56.442000-300 I14165596A287 LEVEL: Event
PID : 843832 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:240
DATA #1 : String, 26 bytes
Stop phase is in progress.
2008-02-21-08.10.56.444783-300 I14165884A302 LEVEL: Event
PID : 843832 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:250
DATA #1 : String, 41 bytes
Requesting system controller termination.
2008-02-21-08.10.56.450366-300 I14166187A403 LEVEL: Warning
PID : 712906 TID : 1 PROC : db2sysc 5
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, routine_infrastructure, sqlerKillAllFmps, probe:5
MESSAGE : Bringing down all db2fmp processes as part of db2stop
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFE400 : 0000 0000 ....
2008-02-21-08.10.56.456345-300 I14166591A304 LEVEL: Event
PID : 843832 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:260
DATA #1 : String, 43 bytes
System controller termination is completed.
2008-02-21-08.10.56.461462-300 I14166896A381 LEVEL: Event
PID : 843832 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:280
DATA #1 : String, 24 bytes
There is no active EDUs.
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFFCEE0 : 0000 0000 ....
2008-02-21-08.10.56.504322-300 I14167278A342 LEVEL: Severe
PID : 823374 TID : 1 PROC : db2acd 5
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, routine_infrastructure, sqlerFmpOneTimeInit, probe:100
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFF5A4 : FFFF FBEE ....
2008-02-21-08.10.56.654959-300 E14167621A301 LEVEL: Event
PID : 843832 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:911
MESSAGE : ADM7514W Database manager has stopped.
STOP : DB2 DBM
2008-02-21-08.11.09.664000-300 I14167923A417 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 53 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile FORCE1 0 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0022 ..."
2008-02-21-08.11.10.176098-300 I14168341A417 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 53 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile FORCE1 1 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0022 ..."
2008-02-21-08.11.10.595702-300 I14168759A417 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 53 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile FORCE1 2 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0022 ..."
2008-02-21-08.11.11.124888-300 I14169177A417 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 53 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile FORCE1 3 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0022 ..."
2008-02-21-08.11.12.070605-300 I14169595A410 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 46 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile 0 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0020 ...
2008-02-21-08.11.12.694723-300 I14170006A410 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 46 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile 1 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0020 ...
2008-02-21-08.11.13.115940-300 I14170417A410 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 46 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile 2 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0020 ...
2008-02-21-08.11.13.632046-300 I14170828A410 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 46 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile 3 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0020 ...
2008-02-21-08.11.14.577056-300 I14171239A418 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 54 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 0 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0024 ...$
2008-02-21-08.11.15.004794-300 I14171658A418 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 54 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 1 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0024 ...$
2008-02-21-08.11.15.425920-300 I14172077A418 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 54 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 2 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0024 ...$
2008-02-21-08.11.15.941622-300 I14172496A418 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 54 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 3 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0024 ...$
2008-02-21-08.11.17.002107-300 I14172915A422 LEVEL: Event
PID : 639412 TID : 1 PROC : db2start
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 57 bytes
/db2/db2ab7/sqllib/adm/db2rstar db2profile SN ADDNODE 4 2
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9C2C : 0000 0011 ....
2008-02-21-08.11.18.055723-300 E14173338A856 LEVEL: Warning
PID : 806940 TID : 1 PROC : db2star2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, license manager, sqllcRequestAccess, probe:1
MESSAGE : ADM12007E There are "80" day(s) left in the evaluation period for
the product "DB2 Enterprise Server Edition". For evaluation license
terms and conditions, refer to the IBM License Acceptance and License
Information document located in the license directory in the
installation path of this product. If you have licensed this product,
ensure the license key is properly registered. You can register the
license via the License Center or db2licm command line utility. The
license file can be obtained from your licensed product CD.
2008-02-21-08.11.18.296453-300 E14174195A1040 LEVEL: Event
PID : 806940 TID : 1 PROC : db2star2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StartMain, probe:911
MESSAGE : ADM7513W Database manager has started.
START : DB2 DBM
DATA #1 : Build Level, 152 bytes
Instance "db2ab7" uses "64" bits and DB2 code release "SQL09012"
with level identifier "01030107".
Informational tokens are "DB2 v9.1.0.2", "special_17253", "U810940_17253", Fix Pack "2".
DATA #2 : System Info, 224 bytes
System: AIX sapaix08 3 5 00CCD7FE4C00
CPU: total:8 online:8 Threading degree per core:2
Physical Memory(MB): total:7744 free:5866
Virtual Memory(MB): total:32832 free:30943
Swap Memory(MB): total:25088 free:25077
Kernel Params: msgMaxMessageSize:4194304 msgMaxQueueSize:4194304
shmMax:68719476736 shmMin:1 shmIDs:131072
shmSegments:68719476736 semIDs:131072 semNumPerID:65535
semOps:1024 semMaxVal:32767 semAdjustOnExit:16384
2008-02-21-08.11.19.312894-300 I14175236A428 LEVEL: Error
PID : 835728 TID : 1 PROC : db2agent (instance) 4
INSTANCE: db2ab7 NODE : 004
APPHDL : 4-7 APPID: *LOCAL.db2ab7.080221131118
FUNCTION: DB2 UDB, base sys utilities, sqleGetAutomaticStorageDetails, probe:111111
DATA #1 : <preformatted>
dataSize 752 pMemAlloc 1110cdac0 sizeof(struct sqleAutoStorageCfg) 16
2008-02-21-08.11.19.346560-300 I14175665A497 LEVEL: Error
PID : 835728 TID : 1 PROC : db2agent (instance) 4
INSTANCE: db2ab7 NODE : 004
APPHDL : 4-7 APPID: *LOCAL.db2ab7.080221131118
FUNCTION: DB2 UDB, buffer pool services, sqlbInitStorageGroupFiles, probe:50
MESSAGE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
DATA #1 : String, 17 bytes
/db2/AB7/sapdata1
2008-02-21-08.11.19.349637-300 I14176163A619 LEVEL: Severe
PID : 835728 TID : 1 PROC : db2agent (instance) 4
INSTANCE: db2ab7 NODE : 004
APPHDL : 4-7 APPID: *LOCAL.db2ab7.080221131118
FUNCTION: DB2 UDB, buffer pool services, sqlbInitStorageGroupFiles, probe:50
MESSAGE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
DATA #1 : String, 46 bytes
Error during storage group file initialization
DATA #2 : Pointer, 8 bytes
0x0ffffffffffed006
DATA #3 : Pointer, 8 bytes
0x00000001110b3080
2008-02-21-08.11.19.355029-300 I14176783A435 LEVEL: Error
PID : 835728 TID : 1 PROC : db2agent (instance) 4
INSTANCE: db2ab7 NODE : 004
APPHDL : 4-7 APPID: *LOCAL.db2ab7.080221131118
FUNCTION: DB2 UDB, base sys utilities, sqleStartDb, probe:5
RETCODE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
2008-02-21-08.11.19.357831-300 I14177219A370 LEVEL: Warning
PID : 835728 TID : 1 PROC : db2agent (instance) 4
INSTANCE: db2ab7 NODE : 004
APPHDL : 4-7 APPID: *LOCAL.db2ab7.080221131118
FUNCTION: DB2 UDB, base sys utilities, sqle_remap_errors, probe:100
MESSAGE : ZRC 0x800201a5 remapped to SQLCODE -1051
2008-02-21-08.11.19.374857-300 I14177590A336 LEVEL: Severe
PID : 803022 TID : 1 PROC : db2sysc 4
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, sqleSysCtrlAddNode, probe:6
MESSAGE : ADD NODE failed with SQLCODE -1051 MESSAGE TOKEN /db2/AB7/sapdata1 in module SQLECRED
2008-02-21-08.11.19.381604-300 I14177927A440 LEVEL: Event
PID : 639412 TID : 1 PROC : db2start
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 75 bytes
DB2NODE=4 DB2LPORT=2 /db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 4 2
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9C2C : 0000 0024 ...$
2008-02-21-08.11.20.255191-300 I14178368A287 LEVEL: Event
PID : 700804 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:240
DATA #1 : String, 26 bytes
Stop phase is in progress.
2008-02-21-08.11.20.258575-300 I14178656A302 LEVEL: Event
PID : 700804 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:250
DATA #1 : String, 41 bytes
Requesting system controller termination.
2008-02-21-08.11.20.265164-300 I14178959A403 LEVEL: Warning
PID : 803022 TID : 1 PROC : db2sysc 4
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, routine_infrastructure, sqlerKillAllFmps, probe:5
MESSAGE : Bringing down all db2fmp processes as part of db2stop
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFE400 : 0000 0000 ....
2008-02-21-08.11.20.271570-300 I14179363A304 LEVEL: Event
PID : 700804 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:260
DATA #1 : String, 43 bytes
System controller termination is completed.
2008-02-21-08.11.20.276550-300 I14179668A381 LEVEL: Event
PID : 700804 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:280
DATA #1 : String, 24 bytes
There is no active EDUs.
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFFCEE0 : 0000 0000 ....
2008-02-21-08.11.20.312260-300 I14180050A342 LEVEL: Severe
PID : 774176 TID : 1 PROC : db2acd 4
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, routine_infrastructure, sqlerFmpOneTimeInit, probe:100
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFF5A4 : FFFF FBEE ....
2008-02-21-08.11.20.474332-300 E14180393A301 LEVEL: Event
PID : 700804 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:911
MESSAGE : ADM7514W Database manager has stopped.
STOP : DB2 DBM
2008-02-21-08.11.20.600512-300 I14180695A422 LEVEL: Event
PID : 671870 TID : 1 PROC : db2start
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 57 bytes
/db2/db2ab7/sqllib/adm/db2rstar db2profile SN ADDNODE 5 3
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9C2C : 0000 0011 ....
2008-02-21-08.11.21.620771-300 E14181118A856 LEVEL: Warning
PID : 819454 TID : 1 PROC : db2star2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, license manager, sqllcRequestAccess, probe:1
MESSAGE : ADM12007E There are "80" day(s) left in the evaluation period for
the product "DB2 Enterprise Server Edition". For evaluation license
terms and conditions, refer to the IBM License Acceptance and License
Information document located in the license directory in the
installation path of this product. If you have licensed this product,
ensure the license key is properly registered. You can register the
license via the License Center or db2licm command line utility. The
license file can be obtained from your licensed product CD.
2008-02-21-08.11.21.839933-300 E14181975A1040 LEVEL: Event
PID : 819454 TID : 1 PROC : db2star2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StartMain, probe:911
MESSAGE : ADM7513W Database manager has started.
START : DB2 DBM
DATA #1 : Build Level, 152 bytes
Instance "db2ab7" uses "64" bits and DB2 code release "SQL09012"
with level identifier "01030107".
Informational tokens are "DB2 v9.1.0.2", "special_17253", "U810940_17253", Fix Pack "2".
DATA #2 : System Info, 224 bytes
System: AIX sapaix08 3 5 00CCD7FE4C00
CPU: total:8 online:8 Threading degree per core:2
Physical Memory(MB): total:7744 free:5859
Virtual Memory(MB): total:32832 free:30936
Swap Memory(MB): total:25088 free:25077
Kernel Params: msgMaxMessageSize:4194304 msgMaxQueueSize:4194304
shmMax:68719476736 shmMin:1 shmIDs:131072
shmSegments:68719476736 semIDs:131072 semNumPerID:65535
semOps:1024 semMaxVal:32767 semAdjustOnExit:16384
2008-02-21-08.11.22.860106-300 I14183016A428 LEVEL: Error
PID : 37336 TID : 1 PROC : db2agent (instance) 5
INSTANCE: db2ab7 NODE : 005
APPHDL : 5-7 APPID: *LOCAL.db2ab7.080221131121
FUNCTION: DB2 UDB, base sys utilities, sqleGetAutomaticStorageDetails, probe:111111
DATA #1 : <preformatted>
dataSize 752 pMemAlloc 11099bac0 sizeof(struct sqleAutoStorageCfg) 16
2008-02-21-08.11.22.886670-300 I14183445A497 LEVEL: Error
PID : 37336 TID : 1 PROC : db2agent (instance) 5
INSTANCE: db2ab7 NODE : 005
APPHDL : 5-7 APPID: *LOCAL.db2ab7.080221131121
FUNCTION: DB2 UDB, buffer pool services, sqlbInitStorageGroupFiles, probe:50
MESSAGE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
DATA #1 : String, 17 bytes
/db2/AB7/sapdata1
2008-02-21-08.11.22.889226-300 I14183943A619 LEVEL: Severe
PID : 37336 TID : 1 PROC : db2agent (instance) 5
INSTANCE: db2ab7 NODE : 005
APPHDL : 5-7 APPID: *LOCAL.db2ab7.080221131121
FUNCTION: DB2 UDB, buffer pool services, sqlbInitStorageGroupFiles, probe:50
MESSAGE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
DATA #1 : String, 46 bytes
Error during storage group file initialization
DATA #2 : Pointer, 8 bytes
0x0ffffffffffed006
DATA #3 : Pointer, 8 bytes
0x0000000110981080
2008-02-21-08.11.22.894826-300 I14184563A435 LEVEL: Error
PID : 37336 TID : 1 PROC : db2agent (instance) 5
INSTANCE: db2ab7 NODE : 005
APPHDL : 5-7 APPID: *LOCAL.db2ab7.080221131121
FUNCTION: DB2 UDB, base sys utilities, sqleStartDb, probe:5
RETCODE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
2008-02-21-08.11.22.897320-300 I14184999A370 LEVEL: Warning
PID : 37336 TID : 1 PROC : db2agent (instance) 5
INSTANCE: db2ab7 NODE : 005
APPHDL : 5-7 APPID: *LOCAL.db2ab7.080221131121
FUNCTION: DB2 UDB, base sys utilities, sqle_remap_errors, probe:100
MESSAGE : ZRC 0x800201a5 remapped to SQLCODE -1051
2008-02-21-08.11.22.913142-300 I14185370A336 LEVEL: Severe
PID : 758092 TID : 1 PROC : db2sysc 5
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, sqleSysCtrlAddNode, probe:6
MESSAGE : ADD NODE failed with SQLCODE -1051 MESSAGE TOKEN /db2/AB7/sapdata1 in module SQLECRED
2008-02-21-08.11.22.918953-300 I14185707A440 LEVEL: Event
PID : 671870 TID : 1 PROC : db2start
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 75 bytes
DB2NODE=5 DB2LPORT=3 /db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 5 3
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9C2C : 0000 0024 ...$
2008-02-21-08.11.23.793386-300 I14186148A287 LEVEL: Event
PID : 823654 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:240
DATA #1 : String, 26 bytes
Stop phase is in progress.
2008-02-21-08.11.23.796267-300 I14186436A302 LEVEL: Event
PID : 823654 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:250
DATA #1 : String, 41 bytes
Requesting system controller termination.
2008-02-21-08.11.23.802154-300 I14186739A403 LEVEL: Warning
PID : 758092 TID : 1 PROC : db2sysc 5
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, routine_infrastructure, sqlerKillAllFmps, probe:5
MESSAGE : Bringing down all db2fmp processes as part of db2stop
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFE400 : 0000 0000 ....
2008-02-21-08.11.23.808100-300 I14187143A304 LEVEL: Event
PID : 823654 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:260
DATA #1 : String, 43 bytes
System controller termination is completed.
2008-02-21-08.11.23.812951-300 I14187448A381 LEVEL: Event
PID : 823654 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:280
DATA #1 : String, 24 bytes
There is no active EDUs.
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFFCEE0 : 0000 0000 ....
2008-02-21-08.11.23.882148-300 I14187830A342 LEVEL: Severe
PID : 684418 TID : 1 PROC : db2acd 5
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, routine_infrastructure, sqlerFmpOneTimeInit, probe:100
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFF5A4 : FFFF FBEE ....
2008-02-21-08.11.24.008936-300 E14188173A301 LEVEL: Event
PID : 823654 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:911
MESSAGE : ADM7514W Database manager has stopped.
STOP : DB2 DBM
2008-02-21-08.41.01.094426-300 I14188475A371 LEVEL: Warning
PID : 741576 TID : 1 PROC : db2bp
INSTANCE: db2ab7 NODE : 002
FUNCTION: DB2 UDB, Connection Manager, sqleUCappImpConnect, probe:150
RETCODE : ZRC=0x8005006D=-2147155859=SQLE_CA_BUILT
"SQLCA has been built and saved in component specific control block."
2008-02-21-08.41.01.109657-300 I14188847A371 LEVEL: Warning
PID : 741576 TID : 1 PROC : db2bp
INSTANCE: db2ab7 NODE : 002
FUNCTION: DB2 UDB, Connection Manager, sqleUCappImpConnect, probe:150
RETCODE : ZRC=0x8005006D=-2147155859=SQLE_CA_BUILT
"SQLCA has been built and saved in component specific control block."
2008-02-21-08.41.01.115152-300 I14189219A371 LEVEL: Warning
PID : 741576 TID : 1 PROC : db2bp
INSTANCE: db2ab7 NODE : 002
FUNCTION: DB2 UDB, Connection Manager, sqleUCappImpConnect, probe:150
RETCODE : ZRC=0x8005006D=-2147155859=SQLE_CA_BUILT
"SQLCA has been built and saved in component specific control block." -
SQL to create logical partitions
Oracle: 10.2.0.5
I am working with another group, and they are pulling data from one of the databases I work on. They are using what they call 'logical partitions'. Basically, it is a SQL statement with a MOD function in the WHERE clause:
select *
from table
where mod(field,10) = 0
This allows them to divide the table up into 10 chunks, so they run 10 sessions to pull data across the network. They are using array processing (1000 records at a time) in a 3rd-party tool to pull the data and write it to Teradata. I have no ability to change this process to something else. They are not using a cursor; it's just a fetch of 1000 records at a time. I checked that first.
The MOD function forces a full table scan. Before I go and add a bunch of function-based indexes to support this, does anyone know of another way to write these SQLs without having a function on the left side of the WHERE clause, so that they can use an index? I want an index partly because 10 sessions is too slow to pull the data in an acceptable time, so I want to increase the number of sessions I can handle. We are pulling from a number of tables, so if it's all full table scans I am far more constrained on my side.
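To illustrate the two approaches generically, here is a small sketch using sqlite with made-up table and column names: the MOD predicate that forces a scan, and a range-bucket alternative whose BETWEEN predicates an ordinary index on the key can service. (Oracle-specific options such as rowid-range chunking are not shown.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO t (id, payload) VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 101)])

def mod_chunk(conn, n, k):
    """The 'logical partition' style: session k of n filters on a MOD
    of the key. The function on the column defeats a plain index."""
    return [r[0] for r in conn.execute(
        "SELECT id FROM t WHERE id % ? = ? ORDER BY id", (n, k))]

def range_chunks(conn, n):
    """Index-friendly alternative: split [min(id), max(id)] into n
    buckets; each session gets one BETWEEN lo AND hi predicate."""
    lo, hi = conn.execute("SELECT MIN(id), MAX(id) FROM t").fetchone()
    step = (hi - lo + n) // n
    return [(lo + i * step, min(lo + (i + 1) * step - 1, hi))
            for i in range(n)]

# Each of the n sessions would then run:
#   SELECT * FROM t WHERE id BETWEEN :lo AND :hi
```

The range buckets assume the keys are reasonably evenly distributed; for skewed keys, precomputing the boundaries from quantiles gives more even chunks.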
I am hoping there is a way to chunk a table on the fly into buckets or something and use an index, so I can ramp this up to, say, 20-30 sessions per table, with each session getting 1/20 or 1/30 of the table.
Guess2 wrote:
Oracle: 10.2.0.5
I am working with another group, and they are pulling data from one of the databases I work on. They are using what they call 'logical partitions'. Basically, it is a SQL statement with a MOD function in the WHERE clause:
select *
from table
where mod(field,10) = 0
This allows them to divide the table up into 10 chunks, so they run 10 sessions to pull data across the network. They are using array processing (1000 records at a time) in a 3rd-party tool to pull the data and write it to Teradata. I have no ability to change this process to something else. They are not using a cursor; it's just a fetch of 1000 records at a time. I checked that first.
The MOD function forces a full table scan. Before I go and add a bunch of function-based indexes to support this, does anyone know of another way to write these SQLs without having a function on the left side of the WHERE clause, so that they can use an index? I want an index partly because 10 sessions is too slow to pull the data in an acceptable time, so I want to increase the number of sessions I can handle. We are pulling from a number of tables, so if it's all full table scans I am far more constrained on my side.
I am hoping there is a way to chunk a table on the fly into buckets or something and use an index, so I can ramp this up to, say, 20-30 sessions per table, with each session getting 1/20 or 1/30 of the table.
From the school of thought that if some is good, then more is better:
I suspect that the spindle upon which this table resides will be saturated
with I/O requests long before 20 is reached.
A session (CPU) is 100 to 1000 times faster than a mechanical disk.
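The claim can be put into back-of-envelope numbers; the figures below are assumptions for illustration, not measurements:

```python
# Rough model: n concurrent full-scan sessions share one spindle.
DISK_MB_S = 100      # assumed sequential throughput of a single disk
SESSION_MB_S = 25    # assumed rate one unthrottled session can consume

def per_session_rate(n):
    """Per-session throughput once n sessions share the one disk."""
    return min(SESSION_MB_S, DISK_MB_S / n)

def total_rate(n):
    """Aggregate throughput across all n sessions."""
    return n * per_session_rate(n)
```

With these assumed numbers the disk saturates at 4 sessions: total_rate(4) and total_rate(20) are both 100 MB/s, so a 20-session run just slices the same bandwidth thinner per session (and adds seek contention that this simple model ignores).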
As few as half a dozen sessions can overwhelm a single disk drive. -
As made clear in the title:
I'm trying to create a Windows partition using Boot Camp. An error comes up telling me that I need to reformat my current partition(s) into one single partition. However, the drive is already formatted in the correct format, and it is already a single partition.
My computer recently had a kernel panic; apparently the corruption was in the system, and it needed to be erased and re-installed. I have a complete back-up on an external hard drive, and I am definitely not willing to do another one of those just to reformat a partition that is already singular. I restarted the computer after ejecting my back-up and after turning off Time Machine (thinking that Boot Camp was recognizing it as a secondary partition); however, the error still occurs.
Is there any way to get around this?
diskutil list:
/dev/disk0
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *750.2 GB disk0
1: EFI 209.7 MB disk0s1
2: Apple_HFS Macintosh HD 749.3 GB disk0s2
3: Apple_Boot Recovery HD 650.0 MB disk0s3
/dev/disk1
#: TYPE NAME SIZE IDENTIFIER
0: Windows7 *2.9 GB disk1
diskutil cs list:
No CoreStorage logical volume groups found
mount:
/dev/disk0s2 on / (hfs, local, journaled)
devfs on /dev (devfs, local, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
/dev/disk1 on /Volumes/Windows7 (udf, local, nodev, nosuid, read-only, noowners)
From my very basic knowledge, it still looks as if there is only one partition (not including the Windows 7 CD necessary to install the Windows partition). -
Use of Logical Partitions in an Oracle Table...
What is the use of a logical partition in an Oracle table as a target? The technical manual does not explain its significance.
My question is:
If the table has no partitions and we add logical partitions using Data Services, what purpose will they serve?
We are planning to load 30 million records a day into an Oracle table. As of now the target table has no partitions, and we are planning to add them soon. Is there a better way to load the data into the target table, using partitions, bulk loading (API), degree of parallelism, etc.? We have not dealt with data of that volume, so inputs are highly appreciated.
Regards.
Santosh.
Initial Value:
Indicator that NOT NULL is forced for this field
Use
Select this flag if a field to be inserted in the database is to be filled with initial values. The initial value used depends on the data type of the field.
Please note that fields in the database for which this flag is not set can also be filled with initial values.
When you create a table, all fields of the table can be defined as NOT NULL and filled with an initial value. The same applies when converting the table. Only when new fields are added or inserted are they filled with initial values. Key fields are an exception: they are always filled automatically with initial values.
Restrictions and notes:
The initial value cannot be set for fields of data types LCHR, LRAW, and RAW. If the field length is greater than 32, the initial flag cannot be set for fields of data type NUMC.
If a new field is inserted in the table and the initial flag is set, the complete table is scanned on activation and an UPDATE is made to the new field. This can be very time-consuming.
If the initial flag is set for an included structure, the attributes are transferred from the structure. That is, exactly those fields which are marked as initial in the definition have this attribute in the table as well.
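The activation-time behavior described above can be illustrated in generic SQL outside of ABAP; sqlite is used here, with a made-up table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (matnr TEXT PRIMARY KEY, qty INTEGER)")
conn.executemany("INSERT INTO stock VALUES (?, ?)", [("A", 5), ("B", 7)])

# Adding a NOT NULL column is only allowed together with an initial
# (default) value, since the existing rows must be given something.
# (The ABAP Dictionary case does this via a table-wide UPDATE on
# activation, which is why it can be time-consuming on large tables.)
conn.execute("ALTER TABLE stock ADD COLUMN plant TEXT NOT NULL DEFAULT ''")

rows = conn.execute(
    "SELECT matnr, plant FROM stock ORDER BY matnr").fetchall()
# Existing rows now read back with the initial value '' in the new column.
```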
hope it helps,
Saipriya -
We have a cube logically partitioned by sales org. We would now like to add additional sales orgs to the list. Is this possible? Currently, with data in the cube, it is blocked. Does anyone know the steps to re-partition a cube logically, with or without data re-loading? Any options at the database level (Oracle)?
Thanks!!
Hi,
Yes, that is correct: you can't partition a cube with existing data.
Another important thing is that you can partition only based upon selection conditions. There are two such selection conditions:
1. 0CALMONTH
2. 0FISCAL YEAR PERIOD
If neither of these two time characteristics is used in the cube, you can't partition the cube.
STEPS TO PARTITION THE INFOCUBE:
For example, suppose you have a Sales and Distribution cube that you want to partition logically.
We have to create an infocube with the same structure (we can do this by giving the technical name of the cube in the "copy from" option at the time of creation). Activate the cube and come back.
Then you should create update rules for that cube.
Select the first cube, right-click on it, and choose the option Generate Export DataSource.
The system will finish the generation process.
Select the InfoSources option on the left side of the AWB.
There we have DATAMART; select it and choose the refresh icon.
You get the InfoSource with the DataSource assignment.
Select the DataSource, right-click on it, and choose Create InfoPackage.
This will take you to the Maintain InfoPackage screen.
Here you need to select the Data Targets tab and select the target into which the data needs to be updated.
Schedule it and start.
Now manage the data target and check whether the data has been updated or not.
After doing all this, go to InfoProvider, select the first cube, and delete its data.
Now, depending upon your requirement, do the partitioning.
Double-click on the cube; this takes you to the Edit InfoCube screen. From the menu bar, select Extras; it will display the partitioning option. Select it.
A small wizard will then pop up showing the time characteristics. Based upon your requirement, check the box for whichever you want and continue the process.
Hope this will help you
Thanks and Regards
Vara Prasad -
Logical partitioning, pass-through layer, query pruning
Hi,
I am dealing with performance guidelines for BW and encountered few interesting topics, which however I do not fully undestand.
1. Maintenance of logical partitioning.
Let's assume logical partitioning is performed on year. Does it mean that every year or so it is necessary to create additional cube/transformation and modify multiprovider? Is there any automatic procedure by SAP that supports creation of new objects, or it is fully manual?
2. Pass-through layer.
There is very little information about this basic concept. Anyway:
- is the pass-through DSO a write-optimized one? Does it store only one load - the last one? Is it deleted after a load finishes successfully (or before a new load starts)? And does this deletion not destroy the delta mechanism? Does the DSO functionally replace the PSA (i.e. can the PSA be deleted after every load as well)?
3. Query pruning
Does this happen automatically on DB level, or additional developments with exits variables, steering tables and FMs is required?
4. DSOs for master data loads
What is the benefit of using full MD extraction and DSO delta instead of MD delta extraction?
Thanks,
Marcin
1. Maintenance of logical partitioning.
Let's assume logical partitioning is performed on year. Does it mean that every year or so it is necessary to create additional cube/transformation and modify multiprovider? Is there any automatic procedure by SAP that supports creation of new objects, or it is fully manual?
Logical partitioning is when you have separate ODS / Cubes for separate Years etc ....
There is no automated way - however if you want to you can physically partition the cubes using time periods and extend them regularly using the repartitioning options provided.
2. Pass-through layer.
There is very little information about this basic concept. Anyway:
- is the pass-through DSO a write-optimized one? Does it store only one load - the last one? Is it deleted after a load finishes successfully (or before a new load starts)? And does this deletion not destroy the delta mechanism? Does the DSO functionally replace the PSA (i.e. can the PSA be deleted after every load as well)?
Usually a pass through layer is used to
1. Ensure data consistency
2. Possibly use Deltas
3. Additional transformations
In a write-optimized DSO, the request ID is part of the key, and hence delta is based on the request ID. If you do not have any additional transformations, then a write-optimized DSO is essentially like your PSA.
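To make that concrete, here is a rough sketch in plain Python (not SAP code; the request IDs and rows are invented for illustration) of how a request-ID-based delta works: downstream targets simply pick up every request newer than the last one they already transferred.

```python
# Illustrative sketch: in a write-optimized DSO the ever-growing request ID
# is part of the key, so "delta" to a downstream target is just "all
# requests newer than the last one already transferred".
dso = {
    1: [("MAT1", 100)],   # request 1
    2: [("MAT2", 50)],    # request 2
    3: [("MAT1", -20)],   # request 3
}

def delta_since(last_transferred_request):
    """Return all rows from requests loaded after the given request ID."""
    rows = []
    for request_id in sorted(dso):
        if request_id > last_transferred_request:
            rows.extend(dso[request_id])
    return rows

print(delta_since(1))  # rows from requests 2 and 3 only
```

Deleting already-transferred requests from such a pass-through DSO does not break this scheme, because the delta boundary is the last transferred request ID, not the DSO contents as a whole.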
3. Query pruning
Does this happen automatically on DB level, or additional developments with exits variables, steering tables and FMs is required?
Query pruning depends on the rule-based and cost-based optimizers within the DB; you have little control over how well a query executes other than keeping statistics up to date, building aggregates, etc.
4. DSOs for master data loads
What is the benefit of using full MD extraction and DSO delta instead of MD delta extraction?
It depends more on the data volumes and also the number of transformations required...
If you have multiple levels of transformations, use a DSO; or if you have very high data volumes and want to identify changed records, then use a DSO. -
Partition layout greyed out – not possible to create a third partition
Hi all,
Being in need of a new laptop, I bought myself a MacBook Pro Retina, 13-inch, 512 GB SSD last weekend. I've been a Windows user for almost 20 years now, with occasional exploits into Unix/Linux, but I've had it in mind to try out OS X for quite a while. I like the hardware and the design of the MacBook Pro, and the fact that Boot Camp promised to make dual-booting into Win7 a walk in the park led me to just try it out. However…
When I tried running boot camp (based on creating bootable USB’s):
ISO number 1 at first didn't get recognized by Boot Camp
Skipping Boot Camp's bootable-USB creation, downloading the drivers onto a separate USB stick, and using my own bootable USB led Boot Camp to state that it didn't recognize a Windows install disk (even though the USB is bootable and does work)
Creating a full new ISO and using that one to create a bootable USB by boot camp worked, but gave an error during the copying of the files (can’t create USB, error during copying windows installation files)
Making a new image of that ISO within OS X to avoid compatibility issues gave the same error
Using another USB stick gave the same error
Unmounting the image still gave the same error
At that point I gave up on wanting to use boot camp and I decided to do the install myself. Going to disk utility to create new partitions triggered a new issue:
Via the “+”-sign, I can create a second partition on my drive. I can name that partition, format it any way I want and make it as big as I want.
After doing that, the “-“-sign is available to remove that partition again and to merge all back into one big HFS partition.
The tool doesn’t allow me to create 3 partitions though. The partition layout box is greyed out and fixed on “current”, and after the second partition is created, I can create a third one
Rebooting and doing disk utility via CMD+R gives the same issue.
I looked on the internet, found people with similar issues, but until now no real solution (unless a “boot via USB and partition from there”-solution, which I didn’t try yet).
Why doesn’t the system allow me to create 3 partitions?
What can I do to still create 3 partitions?
Is working via a bootable OS X USB really the only way to do this?
user10274248 wrote:
So can anybody help with how to actually create a volume group and how to create a logical volume under a volume group?
LVM is not recommended in Dom0, which is why support for LVM has been removed from the installer. LVM is not cluster-aware and does not support write barriers, both of which are important for a filesystem that is used for storing file-backed disk images. -
0IC_C03 related Inventory Process - Logical Partitioning (Vs) Physical Part
Hello Everyone,
After going through multiple postings throughout the forum and documentation from SAP, it states that the 0IC_C03 InfoCube, when used with non-cumulative key figures, is not recommended to be partitioned logically by fiscal year/calendar year, as queries will read all the data sets due to the stock marker logic.
In our specific scenario,
1. After the InfoCube (0IC_C03) was enhanced with additional characteristics such as document number, movement type, and so on due to business requirements, I was not able to actually use the non-cumulative key figures, as they were not populated within the report.
2. So, we decided not to use the non-cumulative key figures but rather to create two cumulative key figures (Issue Stock Quantity - Receipt Stock Quantity, and Issue Valuated Stock Value - Receipt Valuated Stock Value); both are available in the InfoCube and are calculated during the update process.
These two key figures are cumulative with exception aggregation of LAST based on 0CALDAY.
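For readers unfamiliar with exception aggregation, here is a rough illustration in plain Python (not SAP code; the plants, dates, and values are invented) of what LAST based on 0CALDAY means: when aggregating over time, only the value recorded on the latest calendar day per drilldown group survives.

```python
# Illustrative sketch of exception aggregation "LAST" over 0CALDAY:
# per group (e.g. plant), keep only the key figure value from the
# latest calendar day (dates as YYYYMMDD strings, so string order
# equals date order).
def last_aggregation(rows):
    """rows: list of (group, calday, value) -> {group: value on latest calday}."""
    latest = {}
    for group, calday, value in rows:
        if group not in latest or calday > latest[group][0]:
            latest[group] = (calday, value)
    return {group: value for group, (_, value) in latest.items()}

rows = [
    ("PLANT1", "20110105", 40),
    ("PLANT1", "20110131", 55),  # latest day for PLANT1, so 55 survives
    ("PLANT2", "20110110", 10),
]
print(last_aggregation(rows))
```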
The question is,
Since we are not using the actual non-cumulative key figures (even though we are not using them, the stock marker is still updated and the data compressed based on it, along with the validity table being defined), can we do logical partitioning on the InfoCube based on calendar year?
Thanks....
Hello Elango,
Appreciate your response.
First off, I do understand the difference between logical and physical partitioning, and the question is not about joining them together.
I am sorry if others cannot understand the detailed issue posted. My apology was part of a polite gesture; please do respond with a proper, precise answer if you think you actually understood the question....
The question here is about how I can improve query performance and ease administration by logically breaking down the data.
The issues due to which I am trying to look into different aspects of logical partitioning are:
1. If I do logical partitioning by plant, then due to the stock marker logic I cannot do archiving, because a plant and its related data cannot be archived by a time characteristic when the partitioning is not done by a time characteristic.
2. The reason I would have to have document number and movement type in the InfoCube is due to the kind of reporting users perform.
We have a third party system whose data needs to be reconciled to the data in the plants and storage locations.
And in order to do so, the first step users would be running the report is plant, storage location and sku. From here on for the storage locations which have balance they would like to drill down on to the document number and movement type to see what the actual activity is.
So, to support this requirement I would have to have the above characteristics in the InfoCube.
The question again is: what exactly is the list of issues I would face doing the logical partitioning by a time characteristic?
Once again, even though the non-cumulative key figures are available in the InfoCube, we are not using them for any reporting purpose, so please keep that in consideration while replying.
Thanks
Dharma. -
Standalone report server not found on the network between logical partitions on AIX
Hello,
Here's our architecture:
forms/reports11gr2(patchset 1)
weblogic 10.3.6
on IBM AIX 7.1
Server JRE
Java(TM) SE Runtime Environment (build pap6460sr13ifix-20130303_02(SR13+IV37419) )
IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 AIX ppc64-64 jvmap6460sr13-20130114_1
JRE -client - 1.6.0_27
We have 2 logical partitions on 2 different physical machines, where a cluster of Forms/Reports is installed.
If I have a report server repsrv11g on one logical partition, say box 100 on physical box 6000, the Forms server on the other logical partition, box 101 on physical box 7000, is not able to look up the report server when called from Forms using Run_report_object.
It gives an FRM-41213 error.
If I just run the URL (using the 2nd box) http://101:8888/reports/rwservlet/showjobs?server=repsrv11g, it gives REP-51002: Bind to Reports Server repsrv11g failed.
We thought/read that as long as they're on the same network/domain, the report server would be available.
We also ran rwdiag.sh on one partition; it's not able to find the other one.
We ran the test form which Oracle provides, and it's also not able to find the report server on the network when run on the other LPAR.
Temporarily, we created another report server on the other LPAR, but it still uses the load-balancing DNS when doing web.show_document, so it could potentially fail to bring up a report if the load balancer redirects from one Forms server to the report server on the other partition.
Any thoughts would be greatly appreciated.
Thanks.
Hello,
Any inputs on this pls? -
Physical Vs Logical Partitioning
We have 2 million records in the sales infocube for 3 years. We are currently discussing the pros and cons of using Logical partitioning Vs Physical Partitioning. Please give your inputs.
hi
there are two types of partitioning generally talked about with SAP BW, logical and physical partitioning.
Logical partitioning - instead of having all your data in a single cube, you might break it into separate cubes, with each cube holding a specific year's data, e.g. you could have 5 sales cubes, one for each year 2001 thru 2005.
You would then create a Multi-Provider that allowed you to query all of them together.
A query that needs data from all 5 years would then automatically (you can control this) be split into 5 separate queries, one against each cube, running at the same time. The system automatically merges the results from the 5 queries into a single result set.
So it's easy to see when this could be a benefit. If your queries however are primarily run just for a single year, then you don't receive the benefit of the parallel processing. In non-Oracle DBs, splitting the data like this may still be a benefit by reducing the amount of rows in the fact table that must be read, but does not provide as much value to an Oracle DB since Infocube queries are using a Star_Transformation.
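The split-and-merge behaviour described above can be sketched in plain Python (purely illustrative; the cube names, their contents, and the query function are invented for the example): the MultiProvider fans the query out to each yearly cube in parallel and merges the partial result sets.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch: a MultiProvider over 5 yearly sales cubes runs one
# sub-query per basic cube at the same time and merges the results.
cubes = {
    "SALES_2001": [("A", 10)], "SALES_2002": [("A", 20)],
    "SALES_2003": [("B", 5)],  "SALES_2004": [("A", 7)],
    "SALES_2005": [("B", 3)],
}

def query_cube(name):
    """Stand-in for running the query against one basic cube."""
    return cubes[name]

def multiprovider_query(cube_names):
    merged = {}
    with ThreadPoolExecutor() as pool:      # sub-queries run in parallel
        for partial in pool.map(query_cube, cube_names):
            for key, value in partial:      # merge into a single result set
                merged[key] = merged.get(key, 0) + value
    return merged

print(multiprovider_query(list(cubes)))
```

A query restricted to a single year would be sent to just that year's cube, which is exactly why the benefit disappears when most queries hit only one year.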
Physical Partitioning - I believe only Oracle and Informix currently support Range partitioning. This is a separately licensed option in Oracle.
Physical partitioning allows you to split an InfoCube into smaller pieces. The pieces, or partitions, can only be created by 0FISCPER or 0CALMONTH for an InfoCube (ODSs can be partitioned, but require a DBA's involvement). The DB can then take advantage of this partitioning by "pruning" partitions during a query, e.g. when a query only needs data from June 2005.
The DB is smart enough to restrict the indices and data it will read to the June 2005 partition. This assumes your query restricts/filters on the partitioning characteristic. It can apply this pruning to a range of partitions as well, e.g. 0FISCPER 001/2005 thru 003/2005 would only look at those 3 partitions.
It is NOT smart enough, however, to figure out that if you restrict to 0FISCYEAR = 2005, it should only read 000/2005 thru 016/2005, since 0FISCYEAR is NOT the partitioning characteristic.
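The pruning rule above can be illustrated with a small Python sketch (not database code; the partition list is invented): a range filter on the partitioning characteristic prunes to the matching partitions, while a filter on any other characteristic, such as 0FISCYEAR, forces a read of every partition.

```python
# Illustrative sketch: partitions keyed by 0FISCPER in "PPP/YYYY" form.
partitions = ["001/2005", "002/2005", "003/2005", "001/2006", "002/2006"]

def _key(p):
    """'003/2005' -> '2005003', a string that sorts in date order."""
    return p[4:] + p[:3]

def partitions_to_read(filter_char, low, high):
    if filter_char != "0FISCPER":   # not the partitioning characteristic:
        return list(partitions)     # no pruning, every partition is read
    return [p for p in partitions if _key(low) <= _key(p) <= _key(high)]

print(partitions_to_read("0FISCPER", "001/2005", "003/2005"))  # 3 partitions
print(partitions_to_read("0FISCYEAR", "2005", "2005"))         # all of them
```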
An InfoCube MUST be empty in order to physically partition it. At this time, there is no way to add additional partitions through the AWB, so you want to make sure that you create partitions out into the future for at least a couple of years.
If the base cube is partitioned, any aggregates that contain the partitioning characteristic (0CALMONTH or 0FISCPER) will automatically be partitioned.
In summary, you need to figure out if you want to use physical or logical partitioning on the cube(s), or both, as they are not mutually exclusive.
So you would need to know how the data will be queried, and the volume of data. It would make little sense to partition cubes that will not be very large.
Physical partitioning is done at the database level, and logical partitioning is done at the data target level.
Cube partitioning with the time characteristics 0CALMONTH or 0FISCPER is physical partitioning.
Logical partitioning is when you partition your cube by year or month; that is, you divide the cube into different cubes and create a MultiProvider on top of them.
logical Vs physical partitions ? -
Create a GPT partition table and format with a large volume (solved)
Hello,
I'm having trouble creating a GPT partition table for a large volume (~6T). It is a RAID 5 (hardware) with 3 hard disk drives having a size of 3T each (thus the resulting 6T volume).
I tried creating a GPT partition table with gdisk but it just fails at creating it, stopping here (I've let it run for like 3 hours...):
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/md126.
I also tried with parted but I get the same result. Out of luck, I created a GPT partition table from Windows 7 and 2 NTFS partitions (15G and the rest of space for the other) and it worked just fine. I then tried to format the 15G partition as ext4 but, as for gdisk, mkfs.ext4 will just never stop.
Some information:
fdisk -l
Disk /dev/sda: 256.1 GB, 256060514304 bytes, 500118192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xd9a6c0f5
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 104861695 52429824 83 Linux
/dev/sda2 104861696 466567167 180852736 83 Linux
/dev/sda3 466567168 500117503 16775168 82 Linux swap / Solaris
Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 4294967295 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdd1 1 4294967295 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
Disk /dev/sde: 320.1 GB, 320072933376 bytes, 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x5ffb31fc
Device Boot Start End Blocks Id System
/dev/sde1 * 2048 625139711 312568832 7 HPFS/NTFS/exFAT
Disk /dev/md126: 6001.1 GB, 6001143054336 bytes, 11720982528 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/md126p1 1 4294967295 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
gdisk -l on my RAID volume (/dev/md126):
GPT fdisk (gdisk) version 0.8.7
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/md126: 11720982528 sectors, 5.5 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 8E7D03F1-8C3A-4FE6-B7BA-502D168E87D1
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 11720982494
Partitions will be aligned on 8-sector boundaries
Total free space is 6077 sectors (3.0 MiB)
Number Start (sector) End (sector) Size Code Name
1 34 262177 128.0 MiB 0C01 Microsoft reserved part
2 264192 33032191 15.6 GiB 0700 Basic data partition
3 33032192 11720978431 5.4 TiB 0700 Basic data partition
To make things clear: sda is an SSD on which Archlinux has been freshly installed (sda1 for root, sda2 for home, sda3 for swap), sde is a hard disk drive having Windows 7 installed on it. My goal with the 15G partition is to format it so I can mount /var on the HDD rather than on the SSD. The large volume will be for storage.
So if anyone has any suggestion that would help me out with this, I'd be glad to read.
Cheers
Last edited by Rolinh (2013-08-16 11:16:21)
Well, I finally decided to use a software RAID as I will not share this partition with Windows anyway and it seems a better choice than the fake RAID.
Therefore, I used the mdadm utility to create my RAID 5:
# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
It works like a charm. -
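For anyone reusing those commands, the stride/stripe-width values follow from the RAID geometry. A quick sanity check of the arithmetic (the 128 KiB chunk size is an assumption inferred from stride=32 in the post; verify yours with mdadm --detail):

```python
# ext4 stride/stripe-width arithmetic for the RAID5 above
# (3 devices, one disk's worth of parity).
chunk_kib = 128                    # assumed md chunk size (check mdadm --detail)
block_kib = 4                      # -b 4096
raid_devices = 3
data_disks = raid_devices - 1      # RAID5 loses one disk to parity

stride = chunk_kib // block_kib        # filesystem blocks per chunk
stripe_width = stride * data_disks     # blocks per full data stripe
print(f"-E stride={stride},stripe-width={stripe_width}")
```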
NW 7.3 specific - Database partitioning on top of logical partitioning
Hello folks,
In NW 7.3, I would like to know if it is possible to add a specific database partition rule on top of a logical partitioned cube. For example, if I have a LP cube by fiscal year - I would also like to specifically partition all generated cubes at DB level. I could not find any option in the GUI. In addition, each generated cube can be viewed only (cannot be changed in the GUI). Would anybody know if it is possible?
Thank you
Ioan
Fair point! Let me explain in more detail what I am looking for: in 7.0x, a cube can be partitioned at the DB level by fiscal period. Let's suppose my cube has only fiscal year 2011 data. If I partition the cube at the DB level by fiscal period into 12 buckets, I will get 12 distinct partitions (E table only) in the database. If the user runs a query on 06/2011, then the DB will search for the data only in the 06/2011 bucket - this is obviously faster than browsing the entire cube (even with indexes).
In 7.3, cubes can be logically partitioned (LP). I created an LP by fiscal year - so far so good. Now I would like to partition each individual cube created by the LP at the DB level. Right now I cannot - this means that my fiscal year 2012 cube will have its entire data residing in only 1 large partition, so a 06/2012 query will take longer (in theory).
So my question is --> "Is it possible to partition a cube generated by an LP into fiscal period buckets?" I believe the answer is no right now (Dec 2011).
By the way, all the above is true in an RDBMS environment - this is not a concern for BWA / HANA, since data there is column-based and stored in RAM (not the same technology as an RDBMS).
I hope this clarifies my question
Thank you
Ioan -
Hello,
I need to split the CO-PA data into two InfoCubes (logical partitioning by record type). I know that I can create a few DataSources for CO-PA. Should I use the same DataSource or should I create 2 DataSources?
Please Advice,
David
Message was edited by: David Cohn
What datasources are you using now?
Cheers!
/smw -
LPAR - LOGICAL PARTITION QUESTION -
Hello SDN Experts.
LPAR (LOGICAL PARTITION QUESTION)
Our current production environment is running as a distributed installation on
IBM System p5 570 servers, AIX ver 5.2; each node is running two applications: SAP ERP 2005 SR1 (ABAP + Java) and CSS (Customer Service System).
Node One
• SAP Application (Central Instance, Central Services)
• Oracle 9i Instance for CSS Application.
Node Two.
• Oracle 10G Instance for SAP Application
• CSS Application.
To improve performance we are planning to create a new LPAR for SAP.
According to the IBM HW partner, an LPAR is logically isolated with different HW/SW resources (CPU/memory/disk resources, IP/hostname/mount points)...
Question:
I have these two possible solutions for copying the SAP instances (app + db) to the new LPAR; can I apply SCENARIO 2, which in my opinion is easier than SCENARIO 1?
SCENARIO 1.
In order to migrate the application and database instances to the new LPAR, do I need to follow the procedure explained in the guide:
(*) System Copy for SAP Systems Based on SAP NetWeaver 2004s SR1 ABAP+Java Document version: 1.1 ‒ 08/18/2006
SCENARIO 2.
After creating all file systems (required in AIX), copy the data from the application and database instances to their respective LPARs and change the IP addresses and hostnames in the parameter files according to the following SAP Notes:
Note 8307 - Changing host name on R3 host
Note 403708 - Changing an IP address
Which scenario does SAP recommend in this case?
Thanks for your comments.
If your system is a combined ABAP + Java instance you can't manually change the hostname. It's not only the places listed in that note but many more, partially in .properties files on the filesystem, partially in the database.
Doing that manually may work, but since the process is not documented anywhere and since it depends on the applications running on top of the J2EE instance, it's not supported.
For ABAP + Java instances you must use the "sapinst way" to get support in case of problems.
See note 757692 - Changing the hostname for J2EE Engine 6.40/7.0 installation
Markus