Parameter changes in RAC
Hi,
We have a two-node RAC that is currently up and running. We are going to set up a single-node standby database for it, and we need to modify some initialization parameters. Is it possible to shut down and restart one instance, and then restart the second instance afterwards, so that the RAC database remains available to the application without shutting down and restarting the whole database to make the initialization parameters take effect?
Is it possible to shut down one instance and restart it, and then restart the second instance?
It depends on the parameter(s) being changed. For most of them, yes, this is possible. But if your "primary" is not yet in archivelog mode, then you'll need to issue an ALTER DATABASE command to start archiving, and you won't want one instance up and running while the other is making that change.
Cheers,
Brian
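For a parameter that takes effect at instance startup, the rolling approach Brian describes would look roughly like this (a sketch only; the parameter, database name, and instance names are placeholders, and whether a rolling restart is safe depends on the specific parameter):

```sql
-- Record the change in the shared spfile for all instances
ALTER SYSTEM SET open_cursors = 1000 SCOPE=SPFILE SID='*';

-- Then bounce one instance at a time from the OS prompt:
--   srvctl stop  instance -d MYDB -i MYDB1
--   srvctl start instance -d MYDB -i MYDB1
--   wait for MYDB1 to rejoin, then repeat for MYDB2
```

Because one instance is always up while the other restarts, the application keeps a connection path throughout.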
Similar Messages
-
RAC 11g Rel2 - clusterware won't come up after ASM parameter changes. Help!
I was trying to make the parameter change below in ASM, but got this error message:
SQL> alter system set db_cache_size=200M scope=spfile sid='*';
alter system set db_cache_size=200M scope=spfile sid='*'
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-00384: Insufficient memory to grow cache
Per note 737458.1
You need to unset SGA_TARGET, restart your database and modify DB_CACHE_SIZE, and then reset SGA_TARGET as before.
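Following that note, the sequence would be something like the following (a sketch; the sizes shown are placeholders, not recommendations, and the second value should be your original SGA_TARGET):

```sql
-- Step 1: unset the target so the cache size can be set explicitly
ALTER SYSTEM SET sga_target = 0 SCOPE=SPFILE SID='*';
-- restart the database (all instances), then:

-- Step 2: set the cache size, then restore the original target
ALTER SYSTEM SET db_cache_size = 200M SCOPE=SPFILE SID='*';
ALTER SYSTEM SET sga_target = 2G SCOPE=SPFILE SID='*';
-- restart once more for the new settings to take effect
```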
So I issued the following in ASM.
ASM> alter system set memory_max_target=0 scope=spfile sid='*';
System altered.
ASM> alter system set memory_target=0 scope=spfile sid='*';
System altered.
I shut down the clusterware and ASM using crsctl stop cluster -all.
I then attempted to start the clusterware using crsctl start cluster -all and received the error messages below:
CRS-2672: Attempting to start 'ora.cssd' on 'server100'
CRS-2672: Attempting to start 'ora.diskmon' on 'server100'
CRS-2676: Start of 'ora.diskmon' on 'server100' succeeded
CRS-2676: Start of 'ora.cssd' on 'server100' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'server100'
CRS-2676: Start of 'ora.ctssd' on 'server100' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'server100'
ORA-00843: Parameter not taking MEMORY_MAX_TARGET into account
CRS-2674: Start of 'ora.asm' on 'server100' failed
CRS-2679: Attempting to clean 'ora.asm' on 'server100'
CRS-2681: Clean of 'ora.asm' on 'server100' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'server100'
CRS-2677: Stop of 'ora.ctssd' on 'server100' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'server100'
CRS-2677: Stop of 'ora.diskmon' on 'server100' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'server100'
CRS-2677: Stop of 'ora.cssd' on 'server100' succeeded
CRS-4000: Command Start failed, or completed with errors.
The OCR and voting disks are stored in an ASM disk group. ASM did not like the parameter changes and threw the above errors. I have backups available.
Can anyone tell me how to revert these parameters?
I am unable to issue any commands because the cluster is down:
SQL> create pfile ='/home/p.ora' from spfile;
create pfile ='/home/p.ora' from spfile
ERROR at line 1:
ORA-29701: unable to connect to Cluster Synchronization Service
ORA-29701: unable to connect to Cluster Synchronization Service
SQL>
Thanks
DNJ
Edited by: user11177991 on Aug 5, 2011 9:45 AM
user11177991 wrote:
Can anyone tell me how to revert these parameters?
I am unable to issue any commands because the cluster is down.
Thanks
DNJ
Hi DNJ,
What you need to do is create a temporary pfile with parameters like those below:
*.asm_diskgroups='DATA','FRA'
*.asm_diskstring='ORCL:*'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
Modify the diskgroups above to reflect your disk group configuration, then start ASM using that temporary pfile:
SQL> startup pfile='/tmp/asmpfile.ora'
Once the ASM instance has started, you can recreate the spfile:
SQL> create spfile='+DATA' from pfile='/tmp/asmpfile.ora'
Then shut down ASM and start it back up again.
Hope this helps
Cheers -
Modify parameter file ORACLE RAC 11G
Dear all
I have a production Oracle RAC 11g database. I want to modify the shared spfile, but it is a shared file stored on ASM.
How can I achieve this using a pfile? I would actually like to make the change in a pfile and then create from it a spfile in ASM that is shared by all instances.
Please reply soon.
846671 wrote:
I have a production Oracle RAC 11g database. I want to modify the shared spfile, but it is a shared file stored on ASM.
How can I achieve this using a pfile? I would actually like to make the change in a pfile and then create from it a spfile in ASM that is shared by all instances.
Please reply soon.
You know that the modifications you have made in the spfile are not active until you restart the database. So why don't you modify the spfile directly with "alter system ... scope=spfile" commands? I don't see the point of creating a pfile, modifying it, and recreating the spfile...
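With a shared spfile on ASM, the usual pattern is indeed to record the change directly in the spfile from any one instance. A sketch (the parameter, value, and instance name are placeholders, not recommendations):

```sql
-- Applies to every instance (SID='*'); takes effect at the next restart
ALTER SYSTEM SET processes = 300 SCOPE=SPFILE SID='*';

-- Or for one instance only:
ALTER SYSTEM SET processes = 300 SCOPE=SPFILE SID='ORCL1';
```

No intermediate pfile is needed; the spfile in ASM is updated in place for all instances.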
http://download.oracle.com/docs/cd/B28359_01/install.111/b28264/params.htm#CIHIDAHD -
Resource_manager_plan and Reserved pool size parameter changing every time
Hello All,
In my production database (Oracle 11g RAC), the resource_manager_plan and reserved pool size parameters change every time.
Below are my questions.
Is this parameter changed automatically, or does it require manual intervention?
If it is changed automatically, in what cases does this happen?
I have checked the dba_hist_parameter and dba_hist_snapshot tables for the parameter change history.
Is this parameter linked to process and SQL performance?
Please help me. Thanks.
Regards
Ranjeet
When a scheduler window opens, its resource plan becomes active. For example, MONDAY_WINDOW begins on Monday at 22:00. At this time the current plan is changed to DEFAULT_MAINTENANCE_PLAN. At 00:00 (Tuesday) the plan that was active before Monday 22:00 becomes active again. DEFAULT_MAINTENANCE_PLAN is used by the Autotask clients:
select client_name,WINDOW_GROUP from DBA_AUTOTASK_CLIENT ;
CLIENT_NAME WINDOW_GROUP
auto optimizer stats collection ORA$AT_WGRP_OS
auto space advisor ORA$AT_WGRP_SA
sql tuning advisor ORA$AT_WGRP_SQ
select * from DBA_SCHEDULER_WINGROUP_MEMBERS where WINDOW_GROUP_NAME in (select WINDOW_GROUP from DBA_AUTOTASK_CLIENT);
WINDOW_GROUP_NAME WINDOW_NAME
ORA$AT_WGRP_OS MONDAY_WINDOW
ORA$AT_WGRP_OS TUESDAY_WINDOW
ORA$AT_WGRP_OS WEDNESDAY_WINDOW
ORA$AT_WGRP_OS THURSDAY_WINDOW
ORA$AT_WGRP_OS FRIDAY_WINDOW
ORA$AT_WGRP_OS SATURDAY_WINDOW
ORA$AT_WGRP_OS SUNDAY_WINDOW
ORA$AT_WGRP_SA MONDAY_WINDOW
ORA$AT_WGRP_SA TUESDAY_WINDOW
ORA$AT_WGRP_SA WEDNESDAY_WINDOW
ORA$AT_WGRP_SA THURSDAY_WINDOW
ORA$AT_WGRP_SA FRIDAY_WINDOW
ORA$AT_WGRP_SA SATURDAY_WINDOW
ORA$AT_WGRP_SA SUNDAY_WINDOW
ORA$AT_WGRP_SQ MONDAY_WINDOW
ORA$AT_WGRP_SQ TUESDAY_WINDOW
ORA$AT_WGRP_SQ WEDNESDAY_WINDOW
ORA$AT_WGRP_SQ THURSDAY_WINDOW
ORA$AT_WGRP_SQ FRIDAY_WINDOW
ORA$AT_WGRP_SQ SATURDAY_WINDOW
ORA$AT_WGRP_SQ SUNDAY_WINDOW -
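To see which resource plan each maintenance window activates (and so confirm that the RESOURCE_MANAGER_PLAN changes in the question line up with the window schedule), a query like this against the documented scheduler view can be used:

```sql
SELECT window_name, resource_plan, repeat_interval
FROM   dba_scheduler_windows
ORDER BY window_name;
```

If every window shows DEFAULT_MAINTENANCE_PLAN and the change times match the window start and end times, the plan switches are automatic and need no manual intervention.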
How to transport Parameter changes in a crystal report
Hi All,
Very Good morning!!!
I have designed a Crystal report with static parameters. Earlier I had a dropdown kind of input selection for my parameters.
Now I have a new requirement for direct input in the field; that means no dropdown, just a single date field to be entered directly.
Accordingly, I removed the dropdown and changed it to a single direct date field. I saved these changes to a request and transported it to quality, but I am not sure whether the parameter changes were collected into the request.
I could not find any of my parameter changes in quality. The parameters still behave as dropdowns there, whereas I need them to be direct date entry fields; the transport did not affect the quality server.
Could someone please let me know how to reflect these parameter changes of a Crystal report for BW on the quality server?
Thanks in advance.
Jitendra
Please re-post if this is still an issue, or purchase a case and have a dedicated support
engineer work with you directly -
ADF Task Flow Binding - Refresh ifNeeded being invoked even WITHOUT any Parameter change
Using JDeveloper 11.1.1.6.0
Issue: Task Flow Binding property "refresh = ifNeeded" seems to be triggered even without the mutation of the input parameter.
As per definition, "ifNeeded: refresh the ADF Region if the value of a task flow binding parameter changes." (Reference: 17.5 Refreshing an ADF Region)
Now for the setup which reproduces the issue.
I'll focus at the fragment bounded task flow level and will skip the jspx side.
taskflow: main-flow.xml
contains a single fragment mainFgmt.jsff
has a managed bean defined SampleBean.java as pageFlow scope.
taskflow: sub-flow.xml
contains a single fragment subFgmt.jsff
has an inputParameter SampleBean.java (because it is an input parameter, by default it will be at pageFlow scope)
!important - has a nested taskflow (task flow call as defined in the component pallete) called inner-flow (see below)
the nested taskflow is the default activity
the nested taskflow has an outcome pointing to subFgmt where outcome = "return"
taskflow: inner-flow.xml
contains a single fragment called stop.jsff
has a return activity with outcome = "return"
stop.jsff has a navigation pointing to the outcome.
Finally mainFgmt.jsff has a task flow binding (pageDef)
with id = "sub-flow.xml"
refresh = "ifNeeded"
parameter SampleBean being submitted as sub-flow's input parameter. (id=sampleBean, value=#{pageFlowScope.mainSampleBean})
Assume that code compiles.
In this scenario, where the only tricky condition is the inner nesting (marked !important above), when the nested flow invokes its outcome to go to sub-flow's fragment, mainFgmt restarts its task flow, which makes sub-flow start over again.
Another way of saying it: if sub-flow starts with a nested activity and that nested activity exits out to use sub-flow's view, the refresh=ifNeeded defined at the higher level (on mainFgmt) restarts sub-flow.
In the above example if you notice, the bean (SampleBean) is not really being utilized except that it is completing the purpose of refresh=ifNeeded. This scenario is only to simplify the setup - in practical use this bean will be mutated to be utilized as a refresh mechanism.
Now interestingly, if I change the pattern a bit then the issue will not happen:
Don't use the nested taskflow (inner-flow) as the default activity, let a fragment of sub-flow hold the initial view.
Navigate to the nested flow.
Exit nested flow.
Everything works.
Now in this scenario, it seems like the sub-flow needs to have a view established first for it to be properly be used.
So my questions are as follows:
Can I consider the behavior of the refresh=ifNeeded as a bug in this usecase?
Would it be better to utilize a different way of refreshing (maybe combination of refresh condition) to get around the issue?
Is the use of the task flow as defined logical, or does it cross any boundary or best practice that might be causing this behavior?
Hi,
actually you lost me in your description due to its complexity. I lived under the assumption that sub-flow is already a region on a view in the main flow, but then you said that
"!important - has a nested taskflow (task flow call as defined in the component pallete) called inner-flow (see below)"
which then confused me as to whether inner-flow is now the second level of nesting or the first (it should be the second). If sub-flow is a region, then having "an inputParameter SampleBean.java (because it is an input parameter, by default it will be at pageFlow scope)" is an unnecessarily broad scope, because the region won't live longer than view scope.
Anyway, it seems that the region refresh is triggered by the lifecycle involved, which could be by design or a bug. I suggest you file a Service Request with support and provide a test case, as purely from the description it is hard to parse and understand what is going on.
Frank -
Execute subroutine only when selection parameter changes
Hi ABAP workers,
I have a block of selection parameters, and I created the event AT SELECTION-SCREEN ON BLOCK bl1 with a subroutine. I want that subroutine to be executed only when a parameter in the selection block is changed by the user. But right now it executes every time I press Enter, even if no parameter changes.
Is there any way to achieve this (as in the module pool case, with the addition "ON REQUEST")?
Thank you very much
Ivson
Code involved:
SELECTION-SCREEN BEGIN OF BLOCK bl1 WITH FRAME TITLE text-001.
PARAMETERS: p_bukrs LIKE csks-bukrs MEMORY ID buk OBLIGATORY,
p_ryear LIKE glpct-ryear OBLIGATORY.
SELECT-OPTIONS: s_poper FOR glpct-rpmax,
s_racct FOR glpct-racct,
s_kunnr FOR glpca-kunnr,
s_lifnr FOR glpca-lifnr,
s_sprctr FOR glpct-sprctr.
SELECTION-SCREEN END OF BLOCK bl1.
AT SELECTION-SCREEN ON BLOCK bl1.
PERFORM preselect.
Hi,
You could try this.
Use the FM 'DYNP_VALUES_READ' to get the contents of that screen parameter, and then check the parameter value inside the subroutine using an IF statement.
PNAME is a parameter name here.
DATA: dynpfields TYPE STANDARD TABLE OF dynpread WITH HEADER LINE,
      repid      LIKE sy-repid,
      pname      TYPE dynpread-fieldvalue.
dynpfields-fieldname = 'PNAME'.
APPEND dynpfields.
repid = sy-repid.
CALL FUNCTION 'DYNP_VALUES_READ'
  EXPORTING
    dyname     = repid
    dynumb     = sy-dynnr
  TABLES
    dynpfields = dynpfields
  EXCEPTIONS
    OTHERS     = 1.
READ TABLE dynpfields INDEX 1.
pname = dynpfields-fieldvalue.
Process the subroutine if needed based on the check condition.
Hope this helps you.
Regards,
Subbu -
Headstart Business Rule IN parameter changes value
I've created a Business Rule (CEV rule, triggered by create) with the following code:
IN parameters: p_datum_van, p_cat and p_prest
cursor c_cap is select datum_tot
from cew_cat_prest
where . . .
and p_datum_van between datum_van and datum_tot
and datum_tot is not null;
l_datum_tot cew_cat_prest.datum_tot%type;
begin
dbms_output.put_line('BR5 p_datum_van '||p_datum_van);
open c_cap;
fetch c_cap into l_datum_tot;
if c_cap%found then
dbms_output.put_line('BR5 eerste controle p_datum_van '||p_datum_van);
update cew_cat_prest
set datum_tot = p_datum_van - 1
where cat = p_cat
and prest = p_prest
and datum_tot = l_datum_tot;
dbms_output.put_line('BR5 tweede controle p_datum_van '||p_datum_van);
update cew_cat_prest
set datum_tot = l_datum_tot
where cat = p_cat
and prest = p_prest
and datum_van = p_datum_van;
dbms_output.put_line('BR5 derde controle p_datum_van '||p_datum_van);
The updates are not executed correctly since p_datum_van changes!!!
Eerste controle (first check): p_datum_van is correct
Tweede controle (second check): p_datum_van has changed its value
What we do:
Record 1: van (from) 01/01/2003 tot (to) 31/01/2003
Record 2: van 01/02/2003 tot <null>
Add new record: van 15/01/2003
What we want:
Record 1: van 01/01/2003 tot 14/01/2003
Record 3: van 15/01/2003 tot 31/01/2003
Record 2: van 01/02/2003 tot <null>
Eerste controle (first check): parameter p_datum_van = 15/01/2003
Tweede controle (second check): parameter p_datum_van = 01/01/2003
Remark: if we use the TAPI procedure ups instead of updates, we have the same problem
How is it possible that an IN parameter changes its value during execution of a procedure???
Tim,
Headstart should be able to check out any tables you need.
We think we have the user settings done correctly, but there may be something that we have missed. How can we get Headstart to automatically check out the table(s) for which we want to run the Business Rules design transformer?
The user settings you need are (see also pages 5-7 and 5-10 in the Headstart User's Guide):
- Under 'Process the following objects', choose 'Checked out by anyone'
- Also check the check box '... also Checked In objects'
- Choose whether you want to automatically check out with or without lock
If this does not help, please run the utility with log level 'Debug Detailed' (can also be set in the User Preferences) and report the last few lines of the log messages. They should give an indication of why the check out does not succeed.
Hope this helps,
Sandra Muller -
Parameter changes prevent SAP from starting
Hello all,
We are running SAP ERP 4.7ext 2.00 with two nodes in a Windows/MSSQL Cluster environment.
1. Last night, we made parameter changes to the central instance profile. In particular, we started by adding the parameter enque/table_size = 16384, activated the profile, and took the SAP R/3 RP1 Resource offline. We then tried to bring it back online, but it went into a failed status. We removed the parameter using a text editor and retried bringing the SAP Resource online, but to no avail; it went into a failed state again.
2. We had previously made backups of the profile files created by the reinstallation of the central instance (DEFAULT, START_DVEBMGS00_ZAASAPCCI001, and RP1_DVEBMGS00_ZAASAPCCI001; let's call this File Set A) and decided to use these files instead. These files had the default parameters created at installation. We were successful in bringing the SAP resource online.
3. We made a parameter change and activated the profile in SAP (it does not matter which parameter you choose; for example, changing the number of dialog work processes from the default 2 to 20). We took the SAP R/3 RP1 Resource offline but were unable to bring it back online; it went into a failed status.
4. We then took File Set A and, using a text editor, (1) made changes to the number of work processes and (2) appended the extra parameters. Using these files we were again successful in bringing the SAP Resource online.
5. If we now make any changes to these files, either through SAP or using the text editor, we are unable to restart SAP. We have to revert to step 4 above.
6. A strange anomaly we noticed: if the profile file RP1_DVEBMGS00_ZAASAPCCI001 has the following commented lines at the beginning, for example:
#.* Instance profile RP1_DVEBMGS00_ZAASAPCCI00 *
#.* Version = 000007 *
#.* Generated by user = ABOOM *
#.* Generated on = 11.05.2006 , 09:04:07 *
Then we are unable to restart SAP.
Is this a bug related to running a dual-node SAP cluster? We were previously able to make parameter changes (either through SAP or through the use of a text editor) and restart SAP successfully.
It has now become critical to determine the cause of this anomalous behaviour and resolve the problem. Failing this, the client wants to break the cluster and revert to a distributed SAP system installation (with separate DB and central instance hosts).
Your comments and help will be greatly appreciated.
Regards,
Lebo
Hi Lebo,
Can you try editing your profiles (the correct ones) using the sappad tool (/usr/sap/<SID>/<INST-ID>/run/exe/) and saving them in the same format that was used to open them?
Regards,
Mike -
History of Profile Parameter Changes!!!
Hi,
Please help with any way by which I can find the parameter changes in a profile. I referred to another post and found the TU02 tcode, but this does not help.
Basically, we are trying to find the list of parameter changes that happened from a particular point in time until now.
satish
Hi,
Go to RZ10.
Choose the profile.
Go to Extended maintenance.
Choose Display.
Then select Go To - Detailed List.
Again click Go To - Detailed List.
You can see the history of the parameter change.
Regards
Edited by: bhuban_2010 on Aug 5, 2011 5:35 AM -
Hi,
I have a question about DDL or DML changes in RAC. Assume we have a two-node RAC: if I issue an update statement in one instance and issue the same update statement in another instance, which will take effect?
Hi,
It means the first commit won't take effect?
It will, but only until you issue the second update/commit. RAC is multiple instances accessing the same database (the same tables).
Regards,
Hussein -
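The behaviour asked about above is ordinary row locking, the same as on a single instance: the second update waits on the first transaction's row lock, and whichever transaction commits last leaves the final value. A sketch (the table and values are made up for illustration):

```sql
-- Instance 1:
UPDATE emp SET sal = 100 WHERE empno = 7369;   -- holds the row lock

-- Instance 2 (blocks until instance 1 commits or rolls back):
UPDATE emp SET sal = 200 WHERE empno = 7369;

-- After both sessions commit, the row holds the value written
-- by the transaction that committed last.
```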
Partial Refresh of a Page (Chart Refresh: parameter change)
Hi,
I have page with som reports and 1 chart.
I would like to refresh only the (Flash) chart using a button. It is OK for a simple query (the partial refresh process works; I used the OTN example). But when the query used for the chart is based on an item (select ... from my_tab where cl=:P1_CHART_HIST), the refresh does not take the parameter change into consideration (a refresh is needed after changing the value in the item P1_CHART_HIST)...
I hope that you see what I mean. Do you have a solution for this?
Regards
If you provide a name for the iframe, you can set the target in the hrefs.
I would also provide it with an ID and Name, as you might want to do some fun JS stuff with it in the future.
You can actually with JS change the content
HTML way:
a href="http://www.google.com" target="pdfIFrame"
I would stick with the HTML edition.
The iframe should work like the older frames, e.g. you have to set the name in the iframe too.
Also, I see there was a > missing in your iframe code.
<iframe width="100%" height="100%" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" src="&P1_LINK." name="pdfIFrame">
</iframe>
Maybe you should also consider styling the iframe via. CSS. Use ID or Class on the Iframe to do so.
Edited by: user9178158 on May 11, 2010 3:26 PM
Edited by: user9178158 on May 11, 2010 3:27 PM -
Hi all,
I think the default value of the parallel_threads_per_cpu parameter is 2, but our system's CPU has 8 threads.
I want to change this parameter to address performance issues. Can I change parallel_threads_per_cpu dynamically?
Yes you can:
SQL> show parameter parallel
NAME TYPE VALUE
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean TRUE
parallel_automatic_tuning boolean FALSE
parallel_execution_message_size integer 2148
parallel_instance_group string
parallel_max_servers integer 0
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_threads_per_cpu integer 2
NAME TYPE VALUE
recovery_parallelism integer 0
SQL> alter system set parallel_threads_per_cpu=4;
System altered.
SQL> select * from v$version;
BANNER
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> show parameter threads
NAME TYPE VALUE
parallel_threads_per_cpu integer 4
SQL>
But are you sure this is going to increase performance? Is your system using the parallel option? Is it OLTP? Is it RAC?
Please post your 4-digit Oracle version.
Physical Memory Upgrade [SAP, Oracle parameter changes]
Hello Guru,
Good day!
I'm not sure if I'm in the correct forum; please bear with me if I'm not.
We are planning to increase our production server's physical memory from its current 15360 MB (covering Oracle, SAP and the OS) to 44 GB. Do you have any idea how we can work out which SAP/DB parameters should be increased after we allocate the 44 GB in preparation for go-live? Below are the details of my system: Oracle version, kernel, R/3 release, OS version, SAP parameters and DB parameters.
Reason for the memory upgrade: we will create two clients in one system, with different numbers of users and different plants, e.g. America/Canada.
======================================================================
SAP R/3 Version: SAP 4.6C
Oracle Version: 10.2.0.4.0
OS Level: AIX 5.3
orapaa> oslevel -g
Fileset Actual Level Maintenance Level
bos.rte 5.3.8.0 5.3.0.0
Physical Memory
Real,MB 15360
======================================================================
kernel release 46D
kernel make variant 46D_EXT
compiled on AIX 1 5 0056AA8A4C00
compiled for 64 BIT
compile time Aug 17 2007 10:57:49
update level 0
patch number 2337
source id 0.2337
======================================================================
orapaa> prtconf
System Model: IBM,9117-MMA
Machine Serial Number: 06DDD01
Processor Type: PowerPC_POWER6
Processor Implementation Mode: POWER 6
Processor Version: PV_6_Compat
Number Of Processors: 4
Processor Clock Speed: 4208 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 9 SWT_AMR_SADCB335_SAP_HA_PRI
Memory Size: 15360 MB
Good Memory Size: 15360 MB
Platform Firmware level: Not Available
Firmware Version: IBM,EM340_095
Console Login: enable
Auto Restart: true
Full Core: false
======================================================================
Our current used SAP parameter
Profile parameters for SAP buffers Parameters Name Value Unit
Program buffer
abap/buffersize 850000 Kb
CUA buffer
rsdb/cua/buffersize 10000
Screen buffer
zcsa/presentation_buffer_area 20000000 Byte
Generic key table buffer
zcsa/table_buffer_area 100000000 Byte
Single record table buffer
rtbb/buffer_length 60000
Export/import buffer
rsdb/obj/buffersize 40000 kB
Table definition buffer
rsdb/ntab/entrycount 30000
Field description buffer
rsdb/ntab/ftabsize 60000 kB
Initial record buffer
rsdb/ntab/irbdsize 8000 kB
Short nametab (NTAB)
rsdb/ntab/sntabsize 3000 kB
Calendar buffer
zcsa/calendar_area 500000 Byte
Roll, extended and heap memory EM/TOTAL_SIZE_MB 6144 MB
ztta/roll_area 6500000 Byte
ztta/roll_first 1 Byte
ztta/short_area 1400000 Byte
rdisp/ROLL_SHM 16384 8 kB
rdisp/PG_SHM 16384 8 kB
rdisp/PG_LOCAL 150 8 kB
em/initial_size_MB 4092 MB
em/blocksize_KB 4096 kB
em/address_space_MB 4092 MB
ztta/roll_extension 2000000000 Byte
abap/heap_area_dia 2000000000 Byte
abap/heap_area_nondia 2000000000 Byte
abap/heap_area_total 2000000000 Byte
abap/heaplimit 40000000 Byte
abap/use_paging 0
======================================================================
Oracle Parameter
Oracle Parameter Name Value Unit
SGA_MAX_SIZE 6192 MB
PGA_AGGREGATE_TARGET 400 MB
DB_CACHE_SIZE 0
SHARED_POOL_SIZE 960 MB
LARGE_POOL_SIZE 16 MB
JAVA_POOL_SIZE 32 MB
LOG_BUFFER 14246912
db_block_buffers 655360
Thanks and regards,
Mike
I feel the best way to get the parameters which need to be adjusted is to go for an EarlyWatch check after increasing the physical memory of your SAP system, as we cannot say offhand how and which parameters need to be checked and changed; there are also dependencies between the parameters...
All the best! -
Guten Tag
I understand there is a dictionary table/view/object that tracks changes to database parameter values, but I can't seem to find it.
A co-worker mentioned that the information is available in OEM, so I figured there must be a table somewhere that stores it.
Any thoughts or documentation references are most welcome.
Thanks very much!
-gary>
I assume the SNAP_ID in that table must be a reference to an AWR snap of database information?
>
Why assume? Not the definitive source but see this blog about the dba_hist tables
http://it.toolbox.com/blogs/living-happy-oracle/oracle-the-dba_hist-tables-30168
>
The information in most of the DBA_HIST tables corresponds to the AWR snapshots (the SNAP_ID column tells us to which snapshot the information is relevant to) so they are populated with new information when a snapshot is taken and old information is purged along with the AWR snapshot according to the AWR retention settings.
>
The last bullet point in the article may answer your initial question. This is just the start - there is also a query you can use
>
■Did you ever wonder if someone had changed a system parameter and did not inform you? No more, the DBA_HIST_PARAMETER keeps system parameter information for all snapshots and helps the DBA track any parameter.
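Putting that together, a query along these lines (using the documented DBA_HIST_PARAMETER and DBA_HIST_SNAPSHOT views; the parameter name shown is just an example) lists a parameter's value per AWR snapshot so a change stands out:

```sql
SELECT s.begin_interval_time, p.parameter_name, p.value
FROM   dba_hist_parameter p
       JOIN dba_hist_snapshot s
         ON  s.snap_id         = p.snap_id
         AND s.dbid            = p.dbid
         AND s.instance_number = p.instance_number
WHERE  p.parameter_name = 'open_cursors'
ORDER BY s.begin_interval_time;
```

Note the history only reaches back as far as the AWR retention setting allows.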