Info on table relationships for the WebCenter Interaction database
Hi,
Does anyone know where I can get information about the table relationships for the WebCenter database? I appreciate your help.
Thanks
Yxu
Schema Spy is an amazing tool. I haven't used it for the WebCenter apps yet, but this will map everything out for you, providing the relationships you seek.
http://schemaspy.sourceforge.net/
Similar Messages
-
WSRP consumer for Webcenter Interaction
Hi ,
I have registered a Hello World portlet (an .aspx page developed in .NET) on a WSRP producer, and I am able to consume the portlet in Oracle WebLogic Portal 10.3.2.
I am doing a POC on "Oracle Webcenter Interaction" .
Now I want to consume the Helloworld portlet in Webcenter Interaction Portal.
Do I need to have Oracle WSRP Consumer to consume the Portlet on Webcenter Interaction Portal?
Is there any way to Add the Helloworld Portlet to Webcenter interaction Portal Page?
Thanks In Advance
Srinivas Kootikanti
Edited by: cnukvd100 on Oct 5, 2012 2:06 AM
Schema Spy is an amazing tool. I haven't used it for the WebCenter apps yet, but this will map everything out for you, providing the relationships you seek.
http://schemaspy.sourceforge.net/ -
Any ideas for an interactive database interface?
Based on some user-selected criteria, a VI gets a 2-dimensional array of strings from a database. What I would like to do is associate a true/false value with each row in the 2-D array, which the user can check/uncheck to indicate they would like to get more information from the database related to that row of data. I haven't found any good way to implement this. I would really like it to be a mouse click, not have the user type a string like "yes" or "no".
Make a 1-D array of booleans (you can use a boolean control, or a check box control from the Dialog Controls palette). Put the 1-D array of booleans and the 2-D array of strings in a cluster. Adjust the size of the boolean control so that the boolean rows line up with the string rows. The user can click one of the boolean controls for the row he or she is interested in. To read which row the user checked, unbundle the boolean array from the cluster, then find the index of the row using the Search 1D Array function. You can use the Event Structure to determine when the user has clicked a boolean control.
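Outside LabVIEW, the same selection logic can be sketched in a few lines of Python (the `rows` and `checked` names are invented for illustration): pair each string row with a boolean, then look up which rows were ticked.

```python
# 2-D array of strings from the database, plus a parallel boolean array
# that plays the role of the checkbox column (names are hypothetical).
rows = [["ch0", "1.2"], ["ch1", "3.4"], ["ch2", "5.6"]]
checked = [False, True, True]

# Equivalent of unbundling the boolean array and searching it:
selected_indices = [i for i, flag in enumerate(checked) if flag]
selected_rows = [rows[i] for i in selected_indices]
print(selected_indices)  # -> [1, 2]
```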
-
Unable to retrieve nametab info for logic table BSEG during Database Export
Hi,
Our aim is to migrate to new hardware by doing a Database Export of the existing (Unicode) system and importing it on the new hardware.
I am doing a Database Export on SAP 4.7 SR1, HP-UX, Oracle 9i (Unicode system), and during the Database Export "Post Load Processing" phase I got the error below, as recorded in SAPCLUST.log:
more SAPCLUST.log
/sapmnt/BIA/exe/R3load: START OF LOG: 20090216174944
/sapmnt/BIA/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#20
$ SAP
/sapmnt/BIA/exe/R3load: version R6.40/V1.4 [UNICODE]
Compiled Aug 13 2007 16:20:31
/sapmnt/BIA/exe/R3load -ctf E /nas/biaexp2/DATA/SAPCLUST.STR /nas/biaexp2/DB/DDLORA.T
PL /SAPinst_DIR/SAPCLUST.TSK ORA -l /SAPinst_DIR/SAPCLUST.log
/sapmnt/BIA/exe/R3load: job completed
/sapmnt/BIA/exe/R3load: END OF LOG: 20090216174944
/sapmnt/BIA/exe/R3load: START OF LOG: 20090216182102
/sapmnt/BIA/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#20
$ SAP
/sapmnt/BIA/exe/R3load: version R6.40/V1.4 [UNICODE]
Compiled Aug 13 2007 16:20:31
/sapmnt/BIA/exe/R3load -datacodepage 1100 -e /SAPinst_DIR/SAPCLUST.cmd -l /SAPinst_DI
R/SAPCLUST.log -stop_on_error
(DB) INFO: connected to DB
(DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF8
(GSI) INFO: dbname = "BIA20071101021156
(GSI) INFO: vname = "ORACLE "
(GSI) INFO: hostname = "tinsp041
(GSI) INFO: sysname = "HP-UX"
(GSI) INFO: nodename = "tinsp041"
(GSI) INFO: release = "B.11.11"
(GSI) INFO: version = "U"
(GSI) INFO: machine = "9000/800"
(GSI) INFO: instno = "0020293063"
(EXP) TABLE: "AABLG"
(EXP) TABLE: "CDCLS"
(EXP) TABLE: "CLU4"
(EXP) TABLE: "CLUTAB"
(EXP) TABLE: "CVEP1"
(EXP) TABLE: "CVEP2"
(EXP) TABLE: "CVER1"
(EXP) TABLE: "CVER2"
(EXP) TABLE: "CVER3"
(EXP) TABLE: "CVER4"
(EXP) TABLE: "CVER5"
(EXP) TABLE: "DOKCL"
(EXP) TABLE: "DSYO1"
(EXP) TABLE: "DSYO2"
(EXP) TABLE: "DSYO3"
(EXP) TABLE: "EDI30C"
(EXP) TABLE: "EDI40"
(EXP) TABLE: "EDIDOC"
(EXP) TABLE: "EPIDXB"
(EXP) TABLE: "EPIDXC"
(EXP) TABLE: "GLS2CLUS"
(EXP) TABLE: "IMPREDOC"
(EXP) TABLE: "KOCLU"
(EXP) TABLE: "PCDCLS"
(EXP) TABLE: "REGUC"
myCluster (55.16.Exp): 1557: inconsistent field count detected.
myCluster (55.16.Exp): 1558: nametab says field count (TDESCR) is 305.
myCluster (55.16.Exp): 1561: alternate nametab says field count (TDESCR) is 304.
myCluster (55.16.Exp): 1250: unable to retrieve nametab info for logic table BSEG
myCluster (55.16.Exp): 8033: unable to retrieve nametab info for logic table BSEG
myCluster (55.16.Exp): 2624: failed to convert cluster data of cluster item.
myCluster: RFBLG *003**IN07**0001100000**2007*
myCluster (55.16.Exp): 318: error during conversion of cluster item.
myCluster (55.16.Exp): 319: affected physical table is RFBLG.
(CNV) ERROR: data conversion failed. rc = 2
(RSCP) WARN: env I18N_NAMETAB_TIMESTAMPS = IGNORE
(DB) INFO: disconnected from DB
/sapmnt/BIA/exe/R3load: job finished with 1 error(s)
/sapmnt/BIA/exe/R3load: END OF LOG: 20090216182145
/sapmnt/BIA/exe/R3load: START OF LOG: 20090217115935
/sapmnt/BIA/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#20
$ SAP
/sapmnt/BIA/exe/R3load: version R6.40/V1.4 [UNICODE]
Compiled Aug 13 2007 16:20:31
/sapmnt/BIA/exe/R3load -datacodepage 1100 -e /SAPinst_DIR/SAPCLUST.cmd -l /SAPinst_DI
R/SAPCLUST.log -stop_on_error
(DB) INFO: connected to DB
(DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF8
(GSI) INFO: dbname = "BIA20071101021156
(GSI) INFO: vname = "ORACLE "
(GSI) INFO: hostname = "tinsp041
(GSI) INFO: sysname = "HP-UX"
(GSI) INFO: nodename = "tinsp041"
(GSI) INFO: release = "B.11.11"
(GSI) INFO: version = "U"
(GSI) INFO: machine = "9000/800"
(GSI) INFO: instno = "0020293063"
myCluster (55.16.Exp): 1557: inconsistent field count detected.
myCluster (55.16.Exp): 1558: nametab says field count (TDESCR) is 305.
myCluster (55.16.Exp): 1561: alternate nametab says field count (TDESCR) is 304.
myCluster (55.16.Exp): 1250: unable to retrieve nametab info for logic table BSEG
myCluster (55.16.Exp): 8033: unable to retrieve nametab info for logic table BSEG
myCluster (55.16.Exp): 2624: failed to convert cluster data of cluster item.
myCluster: RFBLG *003**IN07**0001100000**2007*
myCluster (55.16.Exp): 318: error during conversion of cluster item.
myCluster (55.16.Exp): 319: affected physical table is RFBLG.
(CNV) ERROR: data conversion failed. rc = 2
(RSCP) WARN: env I18N_NAMETAB_TIMESTAMPS = IGNORE
(DB) INFO: disconnected from DB
/sapmnt/BIA/exe/R3load: job finished with 1 error(s)
/sapmnt/BIA/exe/R3load: END OF LOG: 20090217115937
The main error is "unable to retrieve nametab info for logic table BSEG".
Your reply to this issue is highly appreciated
Thanks
Sunil
Hello,
according to this output:
/sapmnt/BIA/exe/R3load -datacodepage 1100 -e /SAPinst_DIR/SAPCLUST.cmd -l /SAPinst_DI
R/SAPCLUST.log -stop_on_error
you are doing the export with a non-Unicode SAP codepage. The codepage has to be 4102/4103 (see note #552464 for details). There is a screen in the sapinst dialogues that allows changing the codepage; 1100 is the default in some sapinst versions.
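For illustration only, the difference comes down to the -datacodepage argument passed to R3load; a sketch based on the log above (paths and file names will differ per system):

```shell
# What the log shows (non-Unicode SAP codepage 1100):
R3load -datacodepage 1100 -e /SAPinst_DIR/SAPCLUST.cmd -l /SAPinst_DIR/SAPCLUST.log -stop_on_error
# What a Unicode export should use per note #552464 (4102 big-endian / 4103 little-endian):
R3load -datacodepage 4103 -e /SAPinst_DIR/SAPCLUST.cmd -l /SAPinst_DIR/SAPCLUST.log -stop_on_error
```

In practice the value is chosen via the sapinst dialogue rather than by editing the command by hand.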
Best Regards,
Michael -
Delete restrict for ABAP Dictionary database table
Hi,
I defined two database tables in ABAP dictionary, one with master data, and one with records referencing the master data.
I also defined a foreign key relationship in the second table, so that new entries in the second table are checked against the master data table.
In addition to this behaviour, I also want the Dictionary to perform the check the other way round. In other words, if I try to delete a record in the master data table, this should not be possible while there are records in the second table referencing that record. That's how foreign key relationships work in Oracle databases.
Is there a way to force this behaviour for ABAP Dictionary tables, too? Or is it possible to make the table maintenance view perform this check?
Thanks for your help!
Kind regards,
Tobias
Hello Tobias,
I can delete records in the master table which have dependent entries in the second table without an error or a warning.
How are you deleting the entries, via SM30?
If yes, you can use the [Event 03: Before Deleting the Display Data|http://help.sap.com/saphelp_nw04s/helpdata/en/91/ca9f14a9d111d1a5690000e82deaaa/content.htm]. In this TMG event you can check if the entry can be deleted at all!
If you're using Open SQL statements to delete the records, I don't think the DB layer implicitly checks the dependency. You can always put in an explicit check, though.
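A minimal ABAP sketch of such an explicit check (the tables ZMASTER/ZDETAIL and the field MASTER_ID are invented for illustration):

```abap
* Hypothetical tables: ZMASTER (master data) and ZDETAIL (referencing rows).
DATA: lv_count     TYPE i,
      lv_master_id TYPE char10 VALUE 'M0001'.

* Refuse the delete if dependent entries still exist.
SELECT COUNT(*) FROM zdetail INTO lv_count
  WHERE master_id = lv_master_id.

IF lv_count > 0.
  MESSAGE 'Entry is still referenced; deletion not allowed' TYPE 'E'.
ELSE.
  DELETE FROM zmaster WHERE master_id = lv_master_id.
ENDIF.
```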
Btw, out-of-curiosity, is this a custom or standard table?
BR,
Suhas -
Unicode Export - unable to retrieve nametab info for logic table BSEG
Hi
We are performing a unicode export (CUUC from 4.6C upgrade to ECC 6.0) and we have incurred this error.
Without ORDER BY PRIMARY KEY the exported data may be unusable for some databases
Our OS is HPUX11.31 & Database is 10.2.0.2
myCluster (63.21.Exp): 1610: inconsistent settings for table position validity detected.
myCluster (63.21.Exp): 1611: nametab says table positions are valid.
myCluster (63.21.Exp): 1614: alternate nametab says table positions are not valid.
myCluster (63.21.Exp): 1617: for field 310 of nametab displacement is 1877, yet dbtabpos shows 1885.
myCluster (63.21.Exp): 1621: character length is 1 (in) resp. 2 (out).
myCluster (63.21.Exp): 1257: unable to retrieve nametab info for logic table BSEG .
myCluster (63.21.Exp): 8358: unable to acquire nametab info for logic table BSEG .
myCluster (63.21.Exp): 2949: failed to convert cluster data of cluster item.
myCluster: RFBLG *400**AT10**0000100000**2004*
myCluster (63.21.Exp): 322: error during conversion of cluster item.
myCluster (63.21.Exp): 323: affected physical table is RFBLG.
(CNV) ERROR: data conversion failed. rc = 2
(DB) INFO: disconnected from DB
/usr/sap/SBX/SYS/exe/run/R3load: job finished with 1 error(s)
/usr/sap/SBX/SYS/exe/run/R3load: END OF LOG: 20081102104452
We checked note 913783 as per the CUUC guide, but the correction is only for support packages SAPKB70004 to 06, and we are on package SAPKB70011.
We had found two notes:
1. Note 1238351 - Hom./Het.System Copy SAP NW 7.0 incl. Enhancement Package 1
:Solution:
There are two possible workarounds:
1. Modify DDL<dbs>.TPL (<dbs> = ADA, DB2, DB4, DB6, IND, MSS, ORA) BEFORE the R3load TSK files are generated;
search for the keyword "negdat:" and add "CLU4" and "VER_CLUSTR" to thisline.
2. Modify the TSK file (most probably SAPCLUST.TSK) BEFORE R3load import is(re-)started.
search for the lines starting with "D CLU4 I" and "D VER_CLUSTR I" and change the status (i.e. "err" or "xeq") to "ign" or remove the lines. "
I tried the above solution by editing the DDL*.TPL file, but it just skips the table and marks it as completed. That is not a good solution, as we would be missing the data from table RFBLG.
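For what it's worth, workaround 2 (flipping the status of the CLU4 / VER_CLUSTR lines in the TSK file to "ign") can be sketched as a small script; the sample file contents below are invented for illustration, and the real SAPCLUST.TSK should be backed up first.

```python
import re

def ignore_cluster_tables(lines):
    """Change the status of 'D CLU4 I' / 'D VER_CLUSTR I' lines
    from 'err' or 'xeq' to 'ign', leaving all other lines alone."""
    out = []
    for line in lines:
        if line.startswith(("D CLU4 I", "D VER_CLUSTR I")):
            line = re.sub(r"(err|xeq)$", "ign", line)
        out.append(line)
    return out

# Invented sample of what such TSK lines might look like:
tsk_lines = ["D CLU4 I err", "D VER_CLUSTR I xeq", "D KOCLU I ok"]
print(ignore_cluster_tables(tsk_lines))
# -> ['D CLU4 I ign', 'D VER_CLUSTR I ign', 'D KOCLU I ok']
```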
2. Note 991401 - SYSCOPY EXPORT FAILS:SAPCLUST:ERROR: Code page conversion:
Solution
Activate the table.
Then call the RADCUCNT report. Do not change the selected parameters, but ensure that 'Overwrite Entries' is selected. Set the 'Unicode Length' to 2 and fill the last two fields 'Type' and 'Name' with TABL and TACOPAB respectively. Then select 'No Log' or specify a log name.
Execute the RADCUCNT report and restart the export.
We have not tried this solution, because SAP is still down and the CDCLS job is still running.
We would like to know whether you have faced any issues like the above one and what is your suggested approach and solution.
Is it safe to start SAP now (when the CDCLS job runs) and then try to activate the table RFBLG?
Regards
Senthil
Edited by: J. Senthil Murugan on Nov 3, 2008 1:41 AM
Edited by: J. Senthil Murugan on Nov 3, 2008 3:36 AM
Hi Senthil,
If you have done your pre-conversion steps successfully before and after the upgrade, then you should not see the errors below. However, changes to SPDD tables can sometimes also have an impact during the conversion and throw nametab errors. The RADCUCNT program runs at the end of the upgrade to update the nametab tables if any new changes happened during the upgrade.
You can do any number of exports to complete the jobs successfully. But while the export is running, you shouldn't bring SAP up.
The tables you have mentioned are all cluster tables, and CDCLS, being the biggest, will take hours to complete depending on the size of your database.
Do not play around with the .TSK file unless you are sure you want to re-execute it. Your first possibility was skipped because there may be multiple copies of the same .TSK file, both locally (where you are running the distribution monitor or sapinst) and in the common directory. You may also look at the .TSK.bkp files, because R3load takes information from them and creates a new .TSK. This is not complicated, but it is tricky.
The second possibility is to update the changed tables (e.g. RFBLG) in the conversion tables. Follow the note, but make sure no R3load processes are running before you start SAP. If you don't want to wait long and are sure about restarting the other processes which are running, you can kill them and start SAP. Specify your error tables only and follow the instructions given in the note. Once done, bring down the SAP application and restart the export process using sapinst or the distribution monitor.
Regards,
Vamshi. -
Unable to retrieve nametab info for logic table BSEG
Hi
We are performing a unicode export (CUUC from 4.6C upgrade to ECC 6.0) and we have incurred this error.
Without ORDER BY PRIMARY KEY the exported data may be unusable for some databases
Our OS is HPUX11.31 & Database is 10.2.0.2
myCluster (63.21.Exp): 1610: inconsistent settings for table position validity detected.
myCluster (63.21.Exp): 1611: nametab says table positions are valid.
myCluster (63.21.Exp): 1614: alternate nametab says table positions are not valid.
myCluster (63.21.Exp): 1617: for field 310 of nametab displacement is 1877, yet dbtabpos shows 1885.
myCluster (63.21.Exp): 1621: character length is 1 (in) resp. 2 (out).
myCluster (63.21.Exp): 1257: unable to retrieve nametab info for logic table BSEG .
myCluster (63.21.Exp): 8358: unable to acquire nametab info for logic table BSEG .
myCluster (63.21.Exp): 2949: failed to convert cluster data of cluster item.
myCluster: RFBLG *400**AT10**0000100000**2004*
myCluster (63.21.Exp): 322: error during conversion of cluster item.
myCluster (63.21.Exp): 323: affected physical table is RFBLG.
(CNV) ERROR: data conversion failed. rc = 2
(DB) INFO: disconnected from DB
/usr/sap/SBX/SYS/exe/run/R3load: job finished with 1 error(s)
/usr/sap/SBX/SYS/exe/run/R3load: END OF LOG: 20081102104452
We checked note 913783 as per the CUUC guide, but the correction is only for support packages SAPKB70004 to 06, and we are on package SAPKB70011.
We had found two notes:
1. Note 1238351 - Hom./Het.System Copy SAP NW 7.0 incl. Enhancement Package 1
:Solution:
There are two possible workarounds:
1. Modify DDL<dbs>.TPL (<dbs> = ADA, DB2, DB4, DB6, IND, MSS, ORA) BEFORE the R3load TSK files are generated;
search for the keyword "negdat:" and add "CLU4" and "VER_CLUSTR" to thisline.
2. Modify the TSK file (most probably SAPCLUST.TSK) BEFORE R3load import is(re-)started.
search for the lines starting with "D CLU4 I" and "D VER_CLUSTR I" and change the status (i.e. "err" or "xeq") to "ign" or remove the lines. "
I tried the above solution by editing the DDL*.TPL file, but it just skips the table and marks it as completed. That is not a good solution, as we would be missing the data from table RFBLG.
2. Note 991401 - SYSCOPY EXPORT FAILS:SAPCLUST:ERROR: Code page conversion:
Solution
Activate the table.
Then call the RADCUCNT report. Do not change the selected parameters, but ensure that 'Overwrite Entries' is selected. Set the 'Unicode Length' to 2 and fill the last two fields 'Type' and 'Name' with TABL and TACOPAB respectively. Then select 'No Log' or specify a log name.
Execute the RADCUCNT report and restart the export.
We have not tried this solution, because SAP is still down and the CDCLS job is still running.
We would like to know whether you have faced any issues like the above one and what is your suggested approach and solution.
Is it safe to start SAP now (when the CDCLS job runs) and then try to activate the table RFBLG?
Regards
Senthil
Edited by: J. Senthil Murugan on Nov 3, 2008 1:40 AM
Edited by: J. Senthil Murugan on Nov 3, 2008 3:37 AM
Dear Senthil
I had faced this issue earlier.
Table BSEG requires activity in the ACT phase, such as activation.
If the ACT phase is done using transports and manual activation of this table is not performed, this issue arises.
Please share the relevant information; it seems some steps were missed or not carried out properly in the CU&UC phase.
Otherwise: we applied the solution in Note 991401 (SYSCOPY EXPORT FAILS: SAPCLUST: ERROR: Code page conversion) and it worked well.
But you need to be sure that this table was changed (activated etc.) during the upgrade, up to the export phase.
The issue is that the nametab info is created during the upgrade phase in CU&UC; if this table is touched afterwards, that nametab info is no longer correct because the runtime object has changed.
With RADCUCNT the nametab info will be created again.
All the Best
Best Regards
Deepak Dhawan -
What actually happens @Completed filling free space info for database
Hello,
I see something strange with my 15.7 ASE server.
Earlier, for a very big DB of around 1 TB, the recovery time would be around 15 minutes (I mean, for the 1 TB DB to come online).
Now it's taking only seconds to come up,
so I wanted to check what actually happens in this stage.
Started filling free space info for database 'xxx'
Completed filling free space info for database 'xxx'
The difference is that we have created a new server and bcp'd the data into it.
Please can someone explain.
Thanks
ASE keeps counters in memory of the amount of free space available on devices and segments. When ASE is shut down cleanly (aka "politely"), these values are flushed to disk and used to initialize the in-memory counters on reboot. If ASE is shut down abruptly, the values have to be recalculated, a process which involves reading either every OAM page in the database or every allocation page.
-
ASE - Started filling free space info for database
Hi All
I have an ASE db that is in a RECOVERY state.
This is the last communication in the log: Started filling free space info for database 'BWP'
Does anyone know what this means?
There is a SAP BW running on ASE 15.7.
I am an SAP consultant working onsite at a client and the environment is down due to the DB being in this state.
Any ideas?
00:0002:00000:00014:2014/07/03 10:27:18.04 server Recovering database 'BWP'.
00:0002:00000:00014:2014/07/03 10:27:18.05 server Started estimating recovery log boundaries for database 'BWP'.
00:0002:00000:00014:2014/07/03 10:27:18.07 server Database 'BWP', checkpoint=(249429512, 203), first=(249429512, 203), last=(249429513, 46).
00:0002:00000:00014:2014/07/03 10:27:18.07 server Completed estimating recovery log boundaries for database 'BWP'.
00:0002:00000:00014:2014/07/03 10:27:18.07 server Started ANALYSIS pass for database 'BWP'.
00:0002:00000:00014:2014/07/03 10:27:18.07 server Completed ANALYSIS pass for database 'BWP'.
00:0002:00000:00014:2014/07/03 10:27:18.07 server Log contains all committed transactions until 2014/07/03 10:19:12.65 for database BWP.
00:0002:00000:00014:2014/07/03 10:27:18.07 server Started REDO pass for database 'BWP'. The total number of log records to process is 81.
00:0002:00000:00014:2014/07/03 10:27:18.14 server Completed REDO pass for database 'BWP'.
00:0002:00000:00014:2014/07/03 10:27:18.14 server Timestamp for database 'BWP' is (0x0004, 0xd609797b).
00:0002:00000:00014:2014/07/03 10:27:18.14 server Recovery of database 'BWP' will undo incomplete nested top actions.
00:0002:00000:00014:2014/07/03 10:27:18.14 server Started recovery checkpoint for database 'BWP'.
00:0002:00000:00014:2014/07/03 10:27:18.14 server Completed recovery checkpoint for database 'BWP'.
00:0002:00000:00014:2014/07/03 10:27:18.14 server Started filling free space info for database 'BWP'.
ASE VERSION:
Adaptive Server Enterprise/15.7/EBF 22779 SMP SP122 /P/x86_64/Enterprise Linux/ase157sp12x/3662/64-bit/FBO/Sat Apr 19 05:48:19 2014
Any suggestions on what to do?
J
ASE tracks the free space available on each segment in memory.
If the server is shut down politely, ASE can store the current values on disk and retrieve them at startup. However, if the server is shutdown abruptly (shutdown with nowait, crash, power failure, kill -9, etc.) the free space figures don't get written out. In that case ASE has to recalculate the free space values by reading all the allocation pages or OAM pages in the database. On a big database, that can take time.
Your main choices are to
1) wait it out
2) set the "no freespace accounting" database option and reboot
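For reference, a sketch of how a database option like that is typically set in ASE (the exact option name varies by version; many releases call it "no free space acctg"):

```sql
-- Hedged sketch, not a recommendation: enable the option from master,
-- then checkpoint the affected database so the change takes effect.
use master
go
sp_dboption 'BWP', 'no free space acctg', true
go
use BWP
go
checkpoint
go
```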
Disabling free-space accounting for data segments
While recovery will be much faster with freespace accounting turned off, there are side effects such as unexpected 1105 errors (no free space...) and thresholds not firing as expected. In general I'd advise waiting it out and trying to avoid the use of "shutdown with nowait" going forward (which may or may not be what brought the server down, but it is the main cause you can control).
-bret -
Sample report for filling the database table with test data .
Hi ,
Can anyone provide me with a sample report for filling a database table with test data?
Thanks ,
Abhi.
Hi,
the code:
DATA: itab TYPE TABLE OF z6731_deptdetail,
      wa   TYPE z6731_deptdetail.

* Fill the internal table with a couple of test rows
wa-dept_id = 'z897hkjh'.
wa-description = 'computer'.
APPEND wa TO itab.

wa-dept_id = 'z897hkjhd'.
wa-description = 'computer'.
APPEND wa TO itab.

* Insert the rows into the database table
LOOP AT itab INTO wa.
  INSERT z6731_deptdetail FROM wa.
ENDLOOP.
Reward points if helpful. -
Hi,
Our reports run very slow in SharePoint integrated mode. I looked at the logs, and I see that for one report request there are a bunch of "Forced due to logging gap" rows. Is it some kind of configuration issue, or is it working as it should? Below is the full log for one report request. I have no admin experience at all and I am confused about what each entry in the logs means. I have made some of them bold, as I suspect there is some kind of configuration issue there. Could you please help me understand these logs?
Line 25: 07/29/2014 09:17:34.54
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
Topology
e5mb
Medium
WcfReceiveRequest: LocalAddress: 'http://xxxxx.com:32843/6ce1fa50211546eeabe466d78f0d32a6/ReportStreaming.svc' Channel: 'System.ServiceModel.Channels.ServiceChannel' Action: 'http://schemas.microsoft.com/sqlserver/reporting/2011/06/01/ReportServer/Streaming/ProcessStreamRequest'
MessageId: 'urn:uuid:10b1c246-8c70-400b-9b62-fe0d87a2ae8c'
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 28: 07/29/2014 09:17:34.54
w3wp.exe (0x0A6C)
0x070C
SQL Server Reporting Services
Report Server WCF Runtime
0
Medium
Entering ExeuteCommand - Command = Render
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 29: 07/29/2014 09:17:34.54
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
Authentication Authorization
agb9s
Medium
Non-OAuth request. IsAuthenticated=False, UserIdentityName=, ClaimsCount=0
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 30: 07/29/2014 09:17:34.54
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
General
adyrv
High
Cannot find site lookup info for request Uri http://xxx:32843/6ce1fa50211546eeabe466d78f0d32a6/ReportStreaming.svc.
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 31: 07/29/2014 09:17:34.59
w3wp.exe (0x0A6C)
0x070C
SQL Server Reporting Services
Report Server Catalog
0
Medium
RenderForNewSession('https://xxx.com/xxx.rdl')
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 32: 07/29/2014 09:17:34.60
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
Authentication Authorization
ajmmt
High
[Forced due to logging gap, cached @ 07/29/2014 09:17:34.59, Original Level: VerboseEx] SPRequestParameters: AppPrincipal={0}, UserName={1}, UserKye={2}, RoleCount={3}, Roles={4}
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 33: 07/29/2014 09:17:34.60
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
Database
8acb
High
[Forced due to logging gap, Original Level: VerboseEx] Reverting to process identity
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 34: 07/29/2014 09:17:34.62
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
General
adyrv
High
Cannot find site lookup info for request Uri http://xxx:32843/6ce1fa50211546eeabe466d78f0d32a6/ReportStreaming.svc.
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 35: 07/29/2014 09:17:34.74
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
Database
7t61
High
[Forced due to logging gap, cached @ 07/29/2014 09:17:34.63, Original Level: Verbose] {0}
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 36: 07/29/2014 09:17:34.74
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
General
6t8b
High
[Forced due to logging gap, Original Level: Verbose] Looking up {0} site {1} in the farm {2}
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 37: 07/29/2014 09:17:34.85
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
Runtime
afu6a
High
[Forced due to logging gap, cached @ 07/29/2014 09:17:34.77, Original Level: VerboseEx] No SPAggregateResourceTally associated with thread.
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 38: 07/29/2014 09:17:34.85
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
Runtime
afu6b
High
[Forced due to logging gap, Original Level: VerboseEx] No SPAggregateResourceTally associated with thread.
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 39: 07/29/2014 09:17:34.94
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
Database
7t61
High
[Forced due to logging gap, cached @ 07/29/2014 09:17:34.88, Original Level: Verbose] {0}
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 40: 07/29/2014 09:17:34.94
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
Database
7t61
High
[Forced due to logging gap, Original Level: Verbose] {0}
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 103: 07/29/2014 09:18:05.55
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
Database
7t61
High
[Forced due to logging gap, cached @ 07/29/2014 09:17:34.96, Original Level: Verbose] {0}
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 104: 07/29/2014 09:18:05.55
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
General
6t8b
High
[Forced due to logging gap, Original Level: Verbose] Looking up {0} site {1} in the farm {2}
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 105: 07/29/2014 09:18:05.62
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
General
6t8h
High
[Forced due to logging gap, cached @ 07/29/2014 09:18:05.55, Original Level: Verbose] {0}
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 106: 07/29/2014 09:18:05.62
w3wp.exe (0x0A6C)
0x070C
SharePoint Foundation
Database
7t61
High
[Forced due to logging gap, Original Level: Verbose] {0}
894aa99c-9fb3-c007-caf4-32ba68af9901
Line 107: 07/29/2014 09:18:05.62
w3wp.exe (0x0A6C)
0x070C
SQL Server Reporting Services
Report Server WCF Runtime
0
Medium
Processed report. Report='https://xxx.com/xxx.rdl', Stream=''
894aa99c-9fb3-c007-caf4-32ba68af9901
Thank you.
That is likely the case, as native mode is quicker than SharePoint-integrated mode.
A couple of things to check, and perhaps change if you can:
1) Make sure the latest SSRS ReportViewer is deployed to the SharePoint WFEs. This does not have to match your SSRS deployment version. See http://msdn.microsoft.com/en-us/library/gg492257.aspx
for the compatibility table.
2) If possible, move SSRS to the same SharePoint servers that end users interact with (e.g. the "WFEs").
3) Make sure the underlying server hosting SharePoint has C-States disabled (these are Intel C-States, e.g. C1, C2, C3... aka processor sleep states). If the server has any sort of power regulation, make sure it is set to maximum performance rather than some form of power saving.
4) Make sure the underlying Windows OS has its Power Management set to Maximum, rather than any sort of power savings.
You can review the execution logs in the Reporting Services database. Take a look at the ExecutionLog3 view. Compare the TimeProcessing (time it takes to process the report) and TimeRendering (time it takes for the processed report to render to the end user).
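As a sketch, that comparison can be made with a query like the following against the report server catalog database ("ReportServer" is the default database name; yours may differ):

```sql
-- Compare time spent processing vs. rendering for recent executions.
SELECT TOP 50
       ItemPath,
       TimeStart,
       TimeDataRetrieval,  -- ms spent fetching data
       TimeProcessing,     -- ms spent processing the report
       TimeRendering       -- ms spent rendering for the user
FROM   ReportServer.dbo.ExecutionLog3
ORDER BY TimeStart DESC;
```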
Trevor Seward
Follow or contact me at...
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
Error when create schemas for webcenter using URM.
Hi,
I use Oracle Urm to create necessary schemas for Webcenter. Everything goes well when I create the tables.
However, when I extend my domain with WebCenter and come to the database connection testing step, the tests always fail.
It said: a connection is established, but no result was received from the test query.
I ignored the test, but later, when starting the WLS_Spaces managed server, the server couldn't start because of a database error: the needed tables were not found.
I used SQL Developer to connect to the database but can't see those newly created tables.
But during the URM step, nothing went wrong.
Can you help me with this?
Thank you very much.
Hi Heather,
The reason that you are receiving this error is because the niscope.h file (called by niScope.fp) uses a struct which cannot be compiled into a DLL. This means that the niScope.fp file cannot be included in the target settings. Here's a knowledgebase that describes the error.
http://digital.ni.com/public.nsf/websearch/AC028D9586E947F08625661E006A182F?OpenDocument
If you do want the niScope.fp file to be included then you will need to make some modifications to the niscope.h file and create a typedef for the niScope_wfmInfo struct. Here's info from the help file that describes the type library section and the use of the .fp file.
"Type Library—This button lets you choose whether to add a type library resource to your DLL. Also, you can choose to include links in the type library resource to a Windows help file. LabWindows/CVI generates the type library resource from a function panel (.fp) file. You must specify the name of the .fp file. You can generate a Windows help file from the .fp file by using the Generate Windows Help command in the Options menu of the Function Tree Editor window.
This feature is useful if you intend for your DLL to be used from Visual Basic."
If you do not include the niScope.fp file then you will be able to compile the DLL.
Hope this helps! Let me know if you have any questions.
Erick -
Team, thanks for looking into this.
As a last resort in optimizing my stored procedure (below), I wanted to create a selective XML index (normal XML indexes don't seem to improve performance as needed), but I keep getting this error within my stored proc: "Selective XML Index feature is not supported for the current database version." However, EXECUTE sys.sp_db_selective_xml_index; returns 1, stating that selective XML indexes are enabled on my current database.
Is there ANY alternative way I can optimize the stored proc below?
Thanks in advance for your response(s) !
/****** Object: StoredProcedure [dbo].[MN_Process_DDLSchema_Changes] Script Date: 3/11/2015 3:10:42 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- EXEC [dbo].[MN_Process_DDLSchema_Changes]
ALTER PROCEDURE [dbo].[MN_Process_DDLSchema_Changes]
AS
BEGIN
SET NOCOUNT ON -- Doesn't have an impact (maybe this won't matter for the SQL Server Extended Events sessions being created on the server(s) / DBs)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
select getdate() as getdate_0
DECLARE @XML XML , @Prev_Insertion_time DATETIME
-- Staging Previous Load time for filtering purpose ( Performance optimize while on insert )
SET @Prev_Insertion_time = (SELECT MAX(EE_Time_Stamp) FROM dbo.MN_DDLSchema_Changes_log ) -- Perf Optimize
-- PRINT '1'
CREATE TABLE #Temp
(
EventName VARCHAR(100),
Time_Stamp_EE DATETIME,
ObjectName VARCHAR(100),
ObjectType VARCHAR(100),
DbName VARCHAR(100),
ddl_Phase VARCHAR(50),
ClientAppName VARCHAR(2000),
ClientHostName VARCHAR(100),
server_instance_name VARCHAR(100),
ServerPrincipalName VARCHAR(100),
nt_username VARCHAR(100),
SqlText NVARCHAR(MAX)
)
CREATE TABLE #XML_Hold
(
ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY , -- PK needed for indexing on the XML column
BufferXml XML
)
select getdate() as getdate_01
INSERT INTO #XML_Hold (BufferXml)
SELECT
CAST(target_data AS XML) AS BufferXml -- Buffer storage from SQL Extended Event(s) ; looks like there is a limitation on XML size ?? Need to research .
FROM sys.dm_xe_session_targets xet
INNER JOIN sys.dm_xe_sessions xes
ON xes.address = xet.event_session_address
WHERE xes.name = 'Capture DDL Schema Changes' --Ryelugu : 03/05/2015 Session created within SQL Server Extended Events
--RETURN
--SELECT * FROM #XML_Hold
select getdate() as getdate_1
-- 03/10/2015 RYelugu : Error while creating XML Index : Selective XML Index feature is not supported for the current database version
CREATE SELECTIVE XML INDEX SXI_TimeStamp ON #XML_Hold(BufferXml)
FOR
(
PathTimeStamp = '/RingBufferTarget/event/timestamp' AS XQUERY 'node()'
)
--RETURN
--CREATE PRIMARY XML INDEX [IX_XML_Hold] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index
--SELECT GETDATE() AS GETDATE_2
-- RYelugu 03/10/2015 - Creating a secondary XML index doesn't make a significant improvement at the query optimizer ; instead, creation takes more time . Only a primary index should be good here
--CREATE XML INDEX [IX_XML_Hold_values] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - A primary index must exist before a secondary can be created
--USING XML INDEX [IX_XML_Hold]
---- FOR VALUE
-- --FOR PROPERTY
-- FOR PATH
--SELECT GETDATE() AS GETDATE_3
--PRINT '2'
-- RETURN
SELECT GETDATE() GETDATE_3
INSERT INTO #Temp
(
EventName ,
Time_Stamp_EE ,
ObjectName ,
ObjectType,
DbName ,
ddl_Phase ,
ClientAppName ,
ClientHostName,
server_instance_name,
nt_username,
ServerPrincipalName ,
SqlText
)
SELECT
p.q.value('@name[1]','varchar(100)') AS eventname,
p.q.value('@timestamp[1]','datetime') AS timestampvalue,
p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') AS objectname,
p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') AS ObjectType,
p.q.value('(./action[@name="database_name"]/value)[1]','varchar(100)') AS databasename,
p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') AS ddl_phase,
p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') AS clientappname,
p.q.value('(./action[@name="client_hostname"]/value)[1]','varchar(100)') AS clienthostname,
p.q.value('(./action[@name="server_instance_name"]/value)[1]','varchar(100)') AS server_instance_name,
p.q.value('(./action[@name="nt_username"]/value)[1]','varchar(100)') AS nt_username,
p.q.value('(./action[@name="server_principal_name"]/value)[1]','varchar(100)') AS serverprincipalname,
p.q.value('(./action[@name="sql_text"]/value)[1]','Nvarchar(max)') AS sqltext
FROM #XML_Hold
CROSS APPLY BufferXml.nodes('/RingBufferTarget/event')p(q)
WHERE -- Ryelugu 03/05/2015 - Perf optimize - filtering the buffered XML so as not to look up previously loaded records in the stage table
p.q.value('@timestamp[1]','datetime') >= ISNULL(@Prev_Insertion_time ,p.q.value('@timestamp[1]','datetime'))
AND p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') ='Commit' --Ryelugu 03/06/2015 - Every event records a begin version and a commit version into the buffer ( XML ) ; we need the committed version
AND p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') <> 'STATISTICS' --Ryelugu 03/06/2015 - SQL Server may internally create statistics for #Temp tables ; we do not want CREATE STATISTICS statements to be logged
AND p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') NOT LIKE '%#%' -- Any stored proc that creates a temp table is also captured by the Extended Event ; we don't need it though
AND p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') <> 'Replication Monitor' --Ryelugu : 03/09/2015 We do not want any records captured by Replication Monitor
SELECT GETDATE() GETDATE_4
-- SELECT * FROM #TEMP
-- SELECT COUNT(*) FROM #TEMP
-- SELECT GETDATE()
-- RETURN
-- PRINT '3'
--RETURN
INSERT INTO [dbo].[MN_DDLSchema_Changes_log]
(
[UserName]
,[DbName]
,[ObjectName]
,[client_app_name]
,[ClientHostName]
,[ServerName]
,[SQL_TEXT]
,[EE_Time_Stamp]
,[Event_Name]
)
SELECT
CASE WHEN T.nt_username IS NULL OR LEN(T.nt_username) = 0 THEN t.ServerPrincipalName
ELSE T.nt_username
END
,T.DbName
,T.objectname
,T.clientappname
,t.ClientHostName
,T.server_instance_name
,T.sqltext
,T.Time_Stamp_EE
,T.eventname
FROM
#TEMP T
/** -- RYelugu 03/06/2015 - Filters are now being applied directly while retrieving records from the BUFFER / on the XML
-- Ryelugu 03/15/2015 - More filters are likely to be added on further testing
WHERE ddl_Phase ='Commit'
AND ObjectType <> 'STATISTICS' --Ryelugu 03/06/2015 - SQL Server may internally create statistics for #Temp tables ; we do not want CREATE STATISTICS statements to be logged
AND ObjectName NOT LIKE '%#%' -- Any stored proc that creates a temp table is also captured by the Extended Event ; we don't need it though
AND T.Time_Stamp_EE >= @Prev_Insertion_time --Ryelugu 03/05/2015 - Performance optimize
AND NOT EXISTS ( SELECT 1 FROM [dbo].[MN_DDLSchema_Changes_log] MN
WHERE MN.[ServerName] = T.server_instance_name -- Ryelugu : server name needs to be added to the XML ( events in session )
AND MN.[DbName] = T.DbName
AND MN.[Event_Name] = T.EventName
AND MN.[ObjectName] = T.ObjectName
AND MN.[EE_Time_Stamp] = T.Time_Stamp_EE
AND MN.[SQL_TEXT] = T.SqlText ) -- Ryelugu 03/05/2015 This is a comparison metric as well , but need to decide on the
-- performance factor here ; will take advice from Lance on whether comparison on varchar(max) is a viable idea
*/
--SELECT GETDATE()
--PRINT '4'
--RETURN
SELECT
top 100
[EE_Time_Stamp]
,[ServerName]
,[DbName]
,[Event_Name]
,[ObjectName]
,[UserName]
,[SQL_TEXT]
,[client_app_name]
,[Created_Date]
,[ClientHostName]
FROM
[dbo].[MN_DDLSchema_Changes_log]
ORDER BY [EE_Time_Stamp] desc
-- select getdate()
-- ** DELETE EVENTS after logging into Physical table
-- NEED TO identify if this @XML can be updated into a physical system table such that previously loaded events are left untouched
-- SET @XML.modify('delete /event/class/.[@timestamp="2015-03-06T13:01:19.020Z"]')
-- SELECT @XML
SELECT GETDATE() GETDATE_5
END
GO
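As a side note, I cross-checked the XQuery paths used in the procedure offline before debugging the index. Below is a rough Python sketch; the ring-buffer fragment is hand-made to match the paths the procedure queries (event/@name, data[@name]/value, action[@name]/value), not a real capture from a session, so treat the element shape as an assumption.

```python
import xml.etree.ElementTree as ET

# Hand-made stand-in for sys.dm_xe_session_targets target_data; the element
# shape is assumed from the XQuery paths in the procedure, not a real capture.
sample = """
<RingBufferTarget>
  <event name="object_created" timestamp="2015-03-06T13:01:19.020Z">
    <data name="object_name"><value>MyTable</value></data>
    <data name="object_type"><text>TABLE</text></data>
    <data name="ddl_phase"><text>Commit</text></data>
    <action name="database_name"><value>MyDb</value></action>
  </event>
  <event name="object_created" timestamp="2015-03-06T13:01:19.000Z">
    <data name="object_name"><value>#Scratch</value></data>
    <data name="object_type"><text>TABLE</text></data>
    <data name="ddl_phase"><text>Begin</text></data>
    <action name="database_name"><value>MyDb</value></action>
  </event>
</RingBufferTarget>
"""

root = ET.fromstring(sample)
rows = []
for ev in root.findall("event"):
    # Same fields the .value() calls shred out in the INSERT ... SELECT.
    rows.append({
        "eventname": ev.get("name"),
        "timestamp": ev.get("timestamp"),
        "objectname": ev.findtext("data[@name='object_name']/value"),
        "ddl_phase": ev.findtext("data[@name='ddl_phase']/text"),
        "databasename": ev.findtext("action[@name='database_name']/value"),
    })

# Mirror the WHERE clause: keep only committed events on non-temp objects.
committed = [r for r in rows
             if r["ddl_phase"] == "Commit" and "#" not in r["objectname"]]
print(committed)
```

Shredding a copy of the buffer this way makes it easy to confirm the begin/commit pairs and the temp-table noise the WHERE clause is meant to drop.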
Rajkumar Yelugu

@@VERSION:
Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
May 14 2014 18:34:29
Copyright (c) Microsoft Corporation
Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
(1 row(s) affected)
Compatibility level is set to 110.
One of the documented limitations is "XML columns with a depth of more than 128 nested nodes".
How do I verify this? Thanks.
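For what it's worth, once a suspect value is selected out of the XML column, the nesting depth can be checked locally. A rough Python sketch (the sample document here is purely illustrative):

```python
import xml.etree.ElementTree as ET

def max_depth(elem, level=1):
    """Deepest element nesting level at or below elem (root counts as 1)."""
    return max([max_depth(child, level + 1) for child in elem] + [level])

# Illustrative document: the deepest chain is a -> b -> d -> e, i.e. 4 levels,
# far below the documented 128-level limit.
doc = ET.fromstring("<a><b><c/><d><e/></d></b></a>")
print(max_depth(doc))
```

Running this over the column values (cast to NVARCHAR and fed to the parser) would show whether any of them approach the 128-node depth limit.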
Rajkumar Yelugu -
Hi to all.
Currently, I am trying to install patches for the portal server.
The server OS is Sun Solaris 8.
We are using Oracle9iAS.
Now, we are installing 9.0.1.4.0 patch set for the Oracle Database Server.
We managed to install the patch, but had a problem with the post-install actions.
We managed to run
-ALTER SYSTEM ENABLE RESTRICTED SESSION;
-@rdbms/admin/catpatch.sql
-ALTER SYSTEM DISABLE RESTRICTED SESSION;
-CONNECT / AS SYSDBA
-update obj$ set status=5 where type#=29 and owner#!=0;
-commit;
But when we come to the next command, which is to shut the database down, we get this:
SQL> update obj$ set status=5 where type#=29 and owner#!=0;
1402 rows updated.
SQL> commit;
Commit complete.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORA-00604: error occurred at recursive SQL level 1
ORA-01219: database not open: queries allowed on fixed tables/views only
We tried to startup the database..it gives us this error..
SQL> startup
ORA-01081: cannot start already-running ORACLE - shut it down first
So, we tried to shutdown again..
SQL> shutdown immediate
ORA-01089: immediate shutdown in progress - no operations are permitted
I have been informed that this may be a database-related problem. Any ideas?
Best Wishes,
Rushdan Md Saad.

Patch sets can be obtained (only) from http://metalink.oracle.com
You need a valid CSI for access.
P.S.: Sorry Werner, I didn't see your post.
Message was edited by:
Ivan Kartik -
How can we suggest a new DBA OCE certification for very large databases?
What web site, or what phone number can we call to suggest creating a VLDB OCE certification.
The largest databases that I have ever worked with were barely over 1 trillion bytes.
Some people told me that the results of being a DBA totally change when you have a VERY LARGE DATABASE.
I could guess that some of the following configuration topics might be on it:
* Partitioning
* parallel
* bigger block size - DSS vs OLTP
* etc
Where could I send in a recommendation?
Thanks, Roger

I wish there were some details about the OCE Data Warehousing exam.
Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
Overview of Data Warehousing
Describe the benefits of a data warehouse
Describe the technical characteristics of a data warehouse
Describe the Oracle Database structures used primarily by a data warehouse
Explain the use of materialized views
Implement Database Resource Manager to control resource usage
Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
Parallelism
Explain how the Oracle optimizer determines the degree of parallelism
Configure parallelism
Explain how parallelism and partitioning work together
Partitioning
Describe types of partitioning
Describe the benefits of partitioning
Implement partition-wise joins
Result Cache
Describe how the SQL Result Cache operates
Identify the scenarios which benefit the most from Result Set Caching
OLAP
Explain how Oracle OLAP delivers high performance
Describe how applications can access data stored in Oracle OLAP cubes
Advanced Compression
Explain the benefits provided by Advanced Compression
Explain how Advanced Compression operates
Describe how Advanced Compression interacts with other Oracle options and utilities
Data integration
Explain Oracle's overall approach to data integration
Describe the benefits provided by ODI
Differentiate the components of ODI
Create integration data flows with ODI
Ensure data quality with OWB
Explain the concept and use of real-time data integration
Describe the architecture of Oracle's data integration solutions
Data mining and analysis
Describe the components of Oracle's Data Mining option
Describe the analytical functions provided by Oracle Data Mining
Identify use cases that can benefit from Oracle Data Mining
Identify which Oracle products use Oracle Data Mining
Sizing
Properly size all resources to be used in a data warehouse configuration
Exadata
Describe the architecture of the Sun Oracle Database Machine
Describe configuration options for an Exadata Storage Server
Explain the advantages provided by the Exadata Storage Server
Best practices for performance
Employ best practices to load incremental data into a data warehouse
Employ best practices for using Oracle features to implement high performance data warehouses