In which cases do we need to write PERFORM USING/CHANGING?
Hi,
In which cases do we need to write PERFORM ... USING/CHANGING, and what exactly are we doing with USING and CHANGING?
Please, can somebody help me?
Thanks,
Subhasis
This is an awfully basic question.
Simply press F1 on PERFORM.
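That said, here is a minimal sketch of what USING and CHANGING mean (the subroutine and variable names are illustrative, not from this thread): by convention USING is for parameters the subroutine reads, CHANGING is for parameters it modifies so the caller sees the new value; both are passed by reference unless declared with VALUE(...).
* Minimal illustrative sketch - names are invented for this example.
DATA: lv_price TYPE p DECIMALS 2 VALUE '100.00',
      lv_total TYPE p DECIMALS 2.

PERFORM add_tax USING lv_price CHANGING lv_total.
* lv_total now holds 119.00, computed inside the FORM.

FORM add_tax USING p_price TYPE p CHANGING p_total TYPE p.
  " p_price is an input by convention; p_total is changed and the
  " caller sees the new value after PERFORM returns.
  p_total = p_price * 119 / 100.
ENDFORM.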
Rob
Similar Messages
-
How to write - Perform using variable changing tables
hi Gurus,
I am facing an issue while writing a perform statement in my code.
PERFORM get_pricing(zvbeln) USING nast-objky
CHANGING gt_komv
gt_vbap
gt_komp
gt_komk.
in program zvbeln :-
FORM get_pricing USING p_nast_objky TYPE nast-objky
tables p_gt_komv type table komv
p_gt_vbap type table vbapvb
p_gt_komp type table komp
p_gt_komk type table komk.
BREAK-POINT.
DATA: lv_vbeln TYPE vbak-vbeln.
MOVE : p_nast_objky TO lv_vbeln.
CALL FUNCTION '/SAPHT/DRM_ORDER_PRC_READ'
EXPORTING
iv_vbeln = lv_vbeln
TABLES
et_komv = p_gt_komv
et_vbap = p_gt_vbap
et_komp = p_gt_komp
et_komk = p_gt_komk.
ENDFORM. " GET_PRICING
But it's giving an error. Please let me know how I can solve this.

Hi,
Please incorporate these changes and try.
PERFORM get_pricing(zvbeln) TABLES gt_komv gt_vbap gt_komp gt_komk
                            USING  nast-objky.
In program zvbeln:
FORM get_pricing TABLES p_gt_komv STRUCTURE komv
                        p_gt_vbap STRUCTURE vbapvb
                        p_gt_komp STRUCTURE komp
                        p_gt_komk STRUCTURE komk
                 USING  p_nast_objky TYPE nast-objky.
* rest of the code stays the same
ENDFORM. " GET_PRICING
The key point: TABLES parameters must come before USING/CHANGING in both the PERFORM call and the FORM definition, and TABLES parameters based on dictionary structures are typed with STRUCTURE.
Note: please check lv_vbeln after the MOVE statement.
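A hedged aside on that note: NAST-OBJKY is a character field, so the order number may arrive without the leading zeros that VBELN needs. If the function module then returns nothing, an ALPHA conversion like this sketch often helps:
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = lv_vbeln
  IMPORTING
    output = lv_vbeln.
* lv_vbeln is now zero-padded to the internal VBELN format.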
Hope this will help you.
Regards,
Smart Varghese -
NFS write performance 6 times slower than read
Hi all,
I built myself a new home server and want to use NFS to export stuff to the clients. The problem is that I get a big difference between writing to and reading from the share. Everything is connected by GBit network, and raw network speed is fine.
Reading on the clients yields about 31 MByte/s, which is almost the native speed of the disks (which are LUKS-encrypted). But writing to the share gives only about 5.1 MByte/s in the best case. Writing to the disks internally gives about 30 MByte/s too. Also, writing with unencrypted rsync from the client to the server gives about 25-30 MByte/s, so it is definitely not a network or disk problem. So I wonder if there is anything I could do to improve the write performance of my NFS shares. Here is the config which gives the best results so far:
Server-Side:
/etc/exports
/mnt/data 192.168.0.0/24(rw,async,no_subtree_check,crossmnt,fsid=0)
/mnt/udata 192.168.0.0/24(rw,async,no_subtree_check,crossmnt,fsid=1)
/etc/conf.d/nfs-server.conf
NFSD_OPTS=""
NFSD_COUNT="32"
PROCNFSD_MOUNTPOINT=""
PROCNFSD_MOUNTOPTS=""
MOUNTD_OPTS="--no-nfs-version 1 --no-nfs-version 2"
NEED_SVCGSSD=""
SVCGSSD_OPTS=""
Client-Side:
/etc/fstab
192.168.0.1:/mnt/data /mnt/NFS nfs rsize=32768,wsize=32768,intr,noatime 0 0
Additional Infos:
NFS to the unencrypted /mnt/udata gives about 20 MByte/s reading and 10 MByte/s writing.
Internal speed of the discs is about 37-38 MByte/s reading/writing for the encrypted one, and 44-45 MByte/s for the unencrypted one (notebook HDD).
I noticed that the load average on the server goes over 10 while the CPU stays at 10-20%.
So if anyone has any idea what might go wrong here please let me know. If you need more information I will gladly provide it.
TIA
seiichiro0185
Last edited by seiichiro0185 (2010-02-06 13:05:23)

Your rsize and wsize look way too big. I just use the defaults and it runs fine.
I don't know what your server is but I plucked this from BSD Magazine.
There is one point worth mentioning here, modern Linux usually uses wsize and rsize 8192 by default and that can cause problems with BSD servers as many support only wsize and rsize 1024. I suggest you add the option -o wsize=1024,rsize=1024 when you mount the share on your Linux machines.
You also might want to check here for some optimisations http://www.linuxselfhelp.com/howtos/NFS … WTO-4.html
A trick to increase NFS write performance is to disable synchronous writes on the server. The NFS specification states that NFS write requests shall not be considered finished before the data written is on a non-volatile medium (normally the disk). This restricts write performance somewhat; asynchronous writes will speed NFS writes up. The Linux nfsd has never done synchronous writes since the Linux file system implementation does not lend itself to this, but on non-Linux servers you can increase the performance this way with this in your exports file:
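The example line itself was cut off in the quote; it amounts to exporting with the async option. A minimal sketch of such a line (the path and client name are illustrative, not the HOWTO's actual text):
/dir   clienthost(rw,async)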
Last edited by sand_man (2010-03-03 00:23:23) -
My iTunes was not working, so I deleted it, and now I cannot download a new version. The download gets stuck at Gathering Required Information, Status: Performing Installation Checks. I am trying to download 10.4 for Windows. I have been trying to download this again for about a week and have deleted all Apple products from my Control Panel and temp folders. Any advice?
Cheers
Jo

Deleting the .itl and rebuilding it will lose date-added information. Playlists may be recovered if the .xml file is imported. A better way might have been to temporarily rename the .itl while iTunes is installed, then restore the original .itl after the fact.
For general advice on problems installing iTunes see Troubleshooting issues with iTunes for Windows updates.
The steps in the second box are a guide to removing everything related to iTunes and then rebuilding it which is often a good starting point unless the symptoms indicate a more specific approach. Review the other boxes and the list of support documents further down page in case one of them applies.
Your library should be unaffected by these steps but there is backup and recovery advice elsewhere in the user tip.
tt2 -
Help needed in SQL performance - using CASE in SQL statement versus 2 queries
Hi,
I have a requirement to find count from a bunch of tables.
The SQL I have gives the count of all members.
I have created 2 queries to find count of active and inactive members.
The key difference is only the active dates.
Each query takes 20 seconds to execute.
I modified the SQL to use a CASE statement in the SELECT, so after the data is fetched the CASE expression evaluates the active date and gives the 2 counts (active and inactive).
Is it advisable to use this approach? Will CASE improve SQL performance? I have to justify this.
Please let me know your thoughts.
Thanks,
J

Hi,
If it can be done in single SQL do it in single SQL.
You said: "Will CASE improve SQL performance?" There are cases that prove the performance better and cases that prove it worse; you should test it on your data and tell us how it behaves.
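As a sketch of the single-pass idea (the table name, column name and date logic are illustrative, not from your system):
SELECT COUNT(CASE WHEN active_date <= SYSDATE THEN 1 END) AS active_count,
       COUNT(CASE WHEN active_date >  SYSDATE THEN 1 END) AS inactive_count
FROM   members;
-- COUNT ignores NULLs, so each CASE branch counts only its matching rows.
COUNT with CASE means one scan of the data replaces the two separate 20-second queries, assuming the scan itself dominates the runtime.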
Regards,
Bhushan -
Authorizations for which transactions are required in BW?
Hi,
Can anyone please give some information on which transaction authorizations are required in BW production support?
Regards,
Aryan

Hi Aryan,
Authorizations for the following transactions are required in BW:
1. RSA1
2. SM37
3. ST22
4. ST04
5. SE38
6. SE37
7. SM12
8. RSKC
9. SM51
10. RSRV
11. RSPC
12. RSMON
The Process Chain Maintenance (transaction RSPC) is used to define, change and view process chains.
The Upload Monitor (transaction RSMO, or RSRQ if the request is known) monitors data load requests.
The Workload Monitor (transaction ST03) shows important overall key performance indicators (KPIs) for system performance.
The OS Monitor (transaction ST06) gives you an overview on the current CPU, memory, I/O and network load on an application server instance.
The database monitor (transaction ST04) checks important performance indicators in the database, such as database size, database buffer quality and database indices.
The SQL trace (transaction ST05) records all activities on the database and enables you to check long runtimes on a DB table or several similar accesses to the same data.
The ABAP runtime analysis (transaction SE30) measures the runtime of programs, transactions and function modules.
The Cache Monitor (accessible with transaction RSRCACHE or from RSRT) shows among other things the cache size and the currently cached queries. The Export/Import Shared buffer determines the cache size; it should be at least 40MB.
Regards,
Ravikanth -
In which cases do we go for an ABAP interface?
Hi friends,
As per my understanding, some scenarios run without PI integration; those are called ABAP interfaces.
So in which cases do we go for PI, and in which cases do we go for an ABAP interface?
please help me on this...
Thanks & Regards
E. Ravi Chandra Reddy

Hi Ravi,
Understand it this way: PI is not only for mapping, it is also used for routing at runtime. Suppose the payload contains A; then it should go to receiver X, otherwise to Y.
So it depends on your business requirement. If you just want to integrate two systems directly, you can do that; but if in future you have a number of systems, you can't build a 1:1 connection every time. At that point you need a middleware to integrate your scenarios.
Regards,
Abhi -
Which ports are required by ODMA?
I am an SAP R/3 administrator.
I want to upgrade from Oracle 8.0.6.3 to 8.1.7.3 under AIX 4.3.3
So I need to use the RUNINSTALLER followed by the ODMA
I want to do this remotely, passing out through one firewall then in through another.
My X-emulation is ReflectionX, for which I already have firewall transparency for port 6000.
My question: Which ports are required for RUNINSTALLER + ODMA
Thanks in advance,
Keith

Hi Keith!
This forum deals with issues specifically related to the Oracle Migration Workbench product. This product is used to migrate non-Oracle databases to Oracle8i and Oracle9i. (It does not enable you to migrate from one Oracle database version to another!)
In order to get assistance in performing a migration from one Oracle database version to another you should contact Oracle RDBMS Support. You should be able to find their contact details in the documentation that comes with ODMA.
hope this helps
- Garry -
Expense Reports Which Do Not Require ReceiptS, Request AP to Review
Expense Reports Which Do Not Require Receipts And Have Justification Entered Request AP To Review
We are getting an error in the workflow for five expense reports.
The workflow is stopped at "Request AP To Review For Spending Policy Compliance".
I have looked in Metalink for this particular scenario and have come up with Note 273784.1, but reading the note I cannot follow the fix.
If anyone has come across this issue, please provide an explanation of how to resolve it.

I found the answer and I am posting it here in case someone else needs it:
You need to run the Expense Report Export with the Debug Switch on (you can find this under the request's parameters). When the output is ready, just select "View Log..." and search for the "REJECT CODE"; it will let you know what went wrong with your expense report. In my case, the suppliers I was using had invalid liability accounts. As soon as I updated them, the invoices were generated.
Regards,
Astrid -
Which profile is required to be set for 'on screen login'?
hi all,
I have a requirement where we have to change a page of CRM which is not an OAF page but a JSP page of Oracle EBS.
Somebody told me that when you set 'On screen login' on, you get the entire navigation on the console.
Can somebody tell me which profile needs to be set on for that?
Thanks
Bhupendra

I have faced a problem. One customized application (Oracle Forms 10g and Reports 10g) is running from all the workstations.
When we run the same application, in the case of Oracle Reports, after getting the parameter form and clicking the button to run the report, it shows the login screen.
Could you tell me what the possible reasons for that are?
Is it related to JInitiator?
Improving redo log writer performance
I have a database on RAC (2 nodes)
Oracle 10g
Linux 3
2 servers PowerEdge 2850
I'm tuning my database with Spotlight. I already have this alert:
"The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
The servers are not in RAID5.
How can I improve redo log writer performance?
Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
Therefore, redo log devices should be placed on fast devices.
Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
To reduce redo write time see Improving redo log writer performance.
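As a concrete sketch of "placing redo on fast devices": an online redo log group can be relocated without downtime by adding a new group on the fast device and dropping the old one (paths, sizes and group numbers here are illustrative, not from this system):
ALTER DATABASE ADD LOGFILE GROUP 4 ('/fastdisk/redo04a.log') SIZE 512M;
-- switch until the old group shows INACTIVE in V$LOG, then:
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
-- finally remove the old group's files at the OS level.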
See Also:
Tuning Contention - Redo Log Files
Tuning Disk I/O - Archive Writer

Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below); they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work with one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, who have used SSD to accelerate Oracle.
> Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.

Do you honestly think this is practical and usable advice, Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):

# Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.

Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission critical databases and a huge return can be made on accelerating Oracle.
Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.

Comment: This statement is true. Per hard disk drive versus per individual solid state disk system you can typically get higher density of storage with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck; write performance, however, can be. Keep in mind, just as with any storage media, you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).
Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.

Comment: If you lose a hard drive for your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges compared to normal HDDs (which store the data inside a Faraday cage).

Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
Slower than conventional disks on sequential I/O

Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory that also impact flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share with you some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
Limited write cycles. Typical flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.

Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
> Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
> .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.

Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program to try it out so that you can do it at no cost from us. -
In which cases do we need to restart the J2EE Engine?
Hi,
In our project , we have only two servers
One for Development & Testing
Second for Production.
Because development and testing happen on the same machine, we are requesting that the client get another server; using the same server for development and testing is causing us problems.
Whenever we make any changes on the BAPI side, we need to restart for those changes to be reflected in the front end or Web Dynpro; if we make any changes in the Config Tool or Visual Admin, we also need to restart.
At the same time, a couple of users are working on the same server for testing, and it troubles them whenever we restart. So we are getting a lot of complaints whenever we restart the server.
So we decided to request a separate server for development, and one of our administrators is asking for proof from the SAP documentation of why we need to restart so many times.
To get the new server for development, we need to show documentation from SAP on the reasons for restarting. I have got some documentation on when a restart is required.
If any of you have some links about which cases require a restart, can you please forward them to me?
Regards
Vijay

Hi Vijay,
Find the official documentation for J2EE engine in the below link
http://help.sap.com/saphelp_nw2004s/helpdata/en/1a/819d42449b0731e10000000a1550b0/content.htm
Regards,
Sudeep -
Hello,
we are currently experiencing heavy I/O problems performing proof of concept testing for one of our customers. Our setup is as follows:
HP ProLiant DL380 with 24GB Ram and 8 15k 72GB SAS drives
An HP P400 Raid controller with 256MB cache in RAID0 mode was used.
Win 2k8r2 was installed on c (a physical Drive) and the database on E
(= two physical drives in RAID0 128k Strip Size)
With the remaining 5 drives, read and write tests were performed using RAID0 with a varying number of drives.
I/O performance, as measured with ATTO Disk Benchmark, increased linearly with the number of drives used, as expected.
We expected to see this increased performance in the database, too and performed the following tests:
- with 3 different tables the full table scan (FTS) (Hint: /*+ FULL (s) NOCACHE (s) */)
- a CTAS statement.
The system was used exclusively for testing.
The used tables:
Table 1: 312 col, 12,248 MB, 11,138,561 rows, avg len 621 bytes
Table 2: 159 col, 4288 MB, 5,441,171 rows, avg len 529 bytes
Table 3: 118 col, 360MB, 820,259 rows, avg len 266 bytes
The FTS has improved as expected. With 5 physical drives in a RAID0, a performance of
420MB/s was achieved.
In the write test, on the other hand, we were not able to achieve any improvement.
The CTAS statement always works at about 5000-6000 blocks/s (80 MB/s).
But when we tried running several CTAS statements in different sessions, the overall speed increased as expected.
Further tests showed that the write speed seems to depend also on the number of columns. 80MB/s were only
possible with Tables 2 and 3. With Table 1, however only 30MB/s were measured.
Is this maybe just an incorrectly set parameter?
What we already tried:
- change the number of db_writer_processes 4 and then to 8
- Manual configuration of PGA and SGA size
- setting DB_BLOCK_SIZE to 16k
- FILESYSTEMIO_OPTIONS set to setall
- checking that Resource Manager are really disabled
Thanks for any help.
V$PARAMETERS
1 lock_name_space
2 processes 150
3 sessions 248
4 timed_statistics TRUE
5 timed_os_statistics 0
6 resource_limit FALSE
7 license_max_sessions 0
8 license_sessions_warning 0
9 cpu_count 8
10 instance_groups
11 event
12 sga_max_size 14495514624
13 use_large_pages TRUE
14 pre_page_sga FALSE
15 shared_memory_address 0
16 hi_shared_memory_address 0
17 use_indirect_data_buffers FALSE
18 lock_sga FALSE
19 processor_group_name
20 shared_pool_size 0
21 large_pool_size 0
22 java_pool_size 0
23 streams_pool_size 0
24 shared_pool_reserved_size 93952409
25 java_soft_sessionspace_limit 0
26 java_max_sessionspace_size 0
27 spfile C:\ORACLE\PRODUCT\11.2.0\DBHOME_1\DATABASE\SPFILEORATEST.ORA
28 instance_type RDBMS
29 nls_language AMERICAN
30 nls_territory AMERICA
31 nls_sort
32 nls_date_language
33 nls_date_format
34 nls_currency
35 nls_numeric_characters
36 nls_iso_currency
37 nls_calendar
38 nls_time_format
39 nls_timestamp_format
40 nls_time_tz_format
41 nls_timestamp_tz_format
42 nls_dual_currency
43 nls_comp BINARY
44 nls_length_semantics BYTE
45 nls_nchar_conv_excp FALSE
46 fileio_network_adapters
47 filesystemio_options
48 clonedb FALSE
49 disk_asynch_io TRUE
50 tape_asynch_io TRUE
51 dbwr_io_slaves 0
52 backup_tape_io_slaves FALSE
53 resource_manager_cpu_allocation 8
54 resource_manager_plan
55 cluster_interconnects
56 file_mapping FALSE
57 gcs_server_processes 0
58 active_instance_count
59 sga_target 14495514624
60 memory_target 0
61 memory_max_target 0
62 control_files E:\ORACLE\ORADATA\ORATEST\CONTROL01.CTL, C:\ORACLE\FAST_RECOVERY_AREA\ORATEST\CONTROL02.CTL
63 db_file_name_convert
64 log_file_name_convert
65 control_file_record_keep_time 7
66 db_block_buffers 0
67 db_block_checksum TYPICAL
68 db_ultra_safe OFF
69 db_block_size 8192
70 db_cache_size 0
71 db_2k_cache_size 0
72 db_4k_cache_size 0
73 db_8k_cache_size 0
74 db_16k_cache_size 0
75 db_32k_cache_size 0
76 db_keep_cache_size 0
77 db_recycle_cache_size 0
78 db_writer_processes 1
79 buffer_pool_keep
80 buffer_pool_recycle
81 db_flash_cache_file
82 db_flash_cache_size 0
83 db_cache_advice ON
84 compatible 11.2.0.0.0
85 log_archive_dest_1
86 log_archive_dest_2
87 log_archive_dest_3
88 log_archive_dest_4
89 log_archive_dest_5
90 log_archive_dest_6
91 log_archive_dest_7
92 log_archive_dest_8
93 log_archive_dest_9
94 log_archive_dest_10
95 log_archive_dest_11
96 log_archive_dest_12
97 log_archive_dest_13
98 log_archive_dest_14
99 log_archive_dest_15
100 log_archive_dest_16
101 log_archive_dest_17
102 log_archive_dest_18
103 log_archive_dest_19
104 log_archive_dest_20
105 log_archive_dest_21
106 log_archive_dest_22
107 log_archive_dest_23
108 log_archive_dest_24
109 log_archive_dest_25
110 log_archive_dest_26
111 log_archive_dest_27
112 log_archive_dest_28
113 log_archive_dest_29
114 log_archive_dest_30
115 log_archive_dest_31
116 log_archive_dest_state_1 enable
117 log_archive_dest_state_2 enable
118 log_archive_dest_state_3 enable
119 log_archive_dest_state_4 enable
120 log_archive_dest_state_5 enable
121 log_archive_dest_state_6 enable
122 log_archive_dest_state_7 enable
123 log_archive_dest_state_8 enable
124 log_archive_dest_state_9 enable
125 log_archive_dest_state_10 enable
126 log_archive_dest_state_11 enable
127 log_archive_dest_state_12 enable
128 log_archive_dest_state_13 enable
129 log_archive_dest_state_14 enable
130 log_archive_dest_state_15 enable
131 log_archive_dest_state_16 enable
132 log_archive_dest_state_17 enable
133 log_archive_dest_state_18 enable
134 log_archive_dest_state_19 enable
135 log_archive_dest_state_20 enable
136 log_archive_dest_state_21 enable
137 log_archive_dest_state_22 enable
138 log_archive_dest_state_23 enable
139 log_archive_dest_state_24 enable
140 log_archive_dest_state_25 enable
141 log_archive_dest_state_26 enable
142 log_archive_dest_state_27 enable
143 log_archive_dest_state_28 enable
144 log_archive_dest_state_29 enable
145 log_archive_dest_state_30 enable
146 log_archive_dest_state_31 enable
147 log_archive_start FALSE
148 log_archive_dest
149 log_archive_duplex_dest
150 log_archive_min_succeed_dest 1
151 standby_archive_dest %ORACLE_HOME%\RDBMS
152 fal_client
153 fal_server
154 log_archive_trace 0
155 log_archive_config
156 log_archive_local_first TRUE
157 log_archive_format ARC%S_%R.%T
158 redo_transport_user
159 log_archive_max_processes 4
160 log_buffer 32546816
161 log_checkpoint_interval 0
162 log_checkpoint_timeout 1800
163 archive_lag_target 0
164 db_files 200
165 db_file_multiblock_read_count 128
166 read_only_open_delayed FALSE
167 cluster_database FALSE
168 parallel_server FALSE
169 parallel_server_instances 1
170 cluster_database_instances 1
171 db_create_file_dest
172 db_create_online_log_dest_1
173 db_create_online_log_dest_2
174 db_create_online_log_dest_3
175 db_create_online_log_dest_4
176 db_create_online_log_dest_5
177 db_recovery_file_dest c:\oracle\fast_recovery_area
178 db_recovery_file_dest_size 4322230272
179 standby_file_management MANUAL
180 db_unrecoverable_scn_tracking TRUE
181 thread 0
182 fast_start_io_target 0
183 fast_start_mttr_target 0
184 log_checkpoints_to_alert FALSE
185 db_lost_write_protect NONE
186 recovery_parallelism 0
187 db_flashback_retention_target 1440
188 dml_locks 1088
189 replication_dependency_tracking TRUE
190 transactions 272
191 transactions_per_rollback_segment 5
192 rollback_segments
193 undo_management AUTO
194 undo_tablespace UNDOTBS1
195 undo_retention 900
196 fast_start_parallel_rollback LOW
197 resumable_timeout 0
198 instance_number 0
199 db_block_checking FALSE
200 recyclebin on
201 db_securefile PERMITTED
202 create_stored_outlines
203 serial_reuse disable
204 ldap_directory_access NONE
205 ldap_directory_sysauth no
206 os_roles FALSE
207 rdbms_server_dn
208 max_enabled_roles 150
209 remote_os_authent FALSE
210 remote_os_roles FALSE
211 sec_case_sensitive_logon TRUE
212 O7_DICTIONARY_ACCESSIBILITY FALSE
213 remote_login_passwordfile EXCLUSIVE
214 license_max_users 0
215 audit_sys_operations FALSE
216 global_context_pool_size
217 db_domain
218 global_names FALSE
219 distributed_lock_timeout 60
220 commit_point_strength 1
221 global_txn_processes 1
222 instance_name oratest
223 service_names ORATEST
224 dispatchers (PROTOCOL=TCP) (SERVICE=ORATESTXDB)
225 shared_servers 1
226 max_shared_servers
227 max_dispatchers
228 circuits
229 shared_server_sessions
230 local_listener
231 remote_listener
232 listener_networks
233 cursor_space_for_time FALSE
234 session_cached_cursors 50
235 remote_dependencies_mode TIMESTAMP
236 utl_file_dir
237 smtp_out_server
238 plsql_v2_compatibility FALSE
239 plsql_warnings DISABLE:ALL
240 plsql_code_type INTERPRETED
241 plsql_debug FALSE
242 plsql_optimize_level 2
243 plsql_ccflags
244 plscope_settings identifiers:none
245 permit_92_wrap_format TRUE
246 java_jit_enabled TRUE
247 job_queue_processes 1000
248 parallel_min_percent 0
249 create_bitmap_area_size 8388608
250 bitmap_merge_area_size 1048576
251 cursor_sharing EXACT
252 result_cache_mode MANUAL
253 parallel_min_servers 0
254 parallel_max_servers 135
255 parallel_instance_group
256 parallel_execution_message_size 16384
257 hash_area_size 131072
258 result_cache_max_size 72482816
259 result_cache_max_result 5
260 result_cache_remote_expiration 0
261 audit_file_dest C:\ORACLE\ADMIN\ORATEST\ADUMP
262 shadow_core_dump none
263 background_core_dump partial
264 background_dump_dest c:\oracle\diag\rdbms\oratest\oratest\trace
265 user_dump_dest c:\oracle\diag\rdbms\oratest\oratest\trace
266 core_dump_dest c:\oracle\diag\rdbms\oratest\oratest\cdump
267 object_cache_optimal_size 102400
268 object_cache_max_size_percent 10
269 session_max_open_files 10
270 open_links 4
271 open_links_per_instance 4
272 commit_write
273 commit_wait
274 commit_logging
275 optimizer_features_enable 11.2.0.3
276 fixed_date
277 audit_trail DB
278 sort_area_size 65536
279 sort_area_retained_size 0
280 cell_offload_processing TRUE
281 cell_offload_decryption TRUE
282 cell_offload_parameters
283 cell_offload_compaction ADAPTIVE
284 cell_offload_plan_display AUTO
285 db_name ORATEST
286 db_unique_name ORATEST
287 open_cursors 300
288 ifile
289 sql_trace FALSE
290 os_authent_prefix OPS$
291 optimizer_mode ALL_ROWS
292 sql92_security FALSE
293 blank_trimming FALSE
294 star_transformation_enabled TRUE
295 parallel_degree_policy MANUAL
296 parallel_adaptive_multi_user TRUE
297 parallel_threads_per_cpu 2
298 parallel_automatic_tuning FALSE
299 parallel_io_cap_enabled FALSE
300 optimizer_index_cost_adj 100
301 optimizer_index_caching 0
302 query_rewrite_enabled TRUE
303 query_rewrite_integrity enforced
304 pga_aggregate_target 4831838208
305 workarea_size_policy AUTO
306 optimizer_dynamic_sampling 2
307 statistics_level TYPICAL
308 cursor_bind_capture_destination memory+disk
309 skip_unusable_indexes TRUE
310 optimizer_secure_view_merging TRUE
311 ddl_lock_timeout 0
312 deferred_segment_creation TRUE
313 optimizer_use_pending_statistics FALSE
314 optimizer_capture_sql_plan_baselines FALSE
315 optimizer_use_sql_plan_baselines TRUE
316 parallel_min_time_threshold AUTO
317 parallel_degree_limit CPU
318 parallel_force_local FALSE
319 optimizer_use_invisible_indexes FALSE
320 dst_upgrade_insert_conv TRUE
321 parallel_servers_target 128
322 sec_protocol_error_trace_action TRACE
323 sec_protocol_error_further_action CONTINUE
324 sec_max_failed_login_attempts 10
325 sec_return_server_release_banner FALSE
326 enable_ddl_logging FALSE
327 client_result_cache_size 0
328 client_result_cache_lag 3000
329 aq_tm_processes 1
330 hs_autoregister TRUE
331 xml_db_events enable
332 dg_broker_start FALSE
333 dg_broker_config_file1 C:\ORACLE\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR1ORATEST.DAT
334 dg_broker_config_file2 C:\ORACLE\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR2ORATEST.DAT
335 olap_page_pool_size 0
336 asm_diskstring
337 asm_preferred_read_failure_groups
338 asm_diskgroups
339 asm_power_limit 1
340 control_management_pack_access DIAGNOSTIC+TUNING
341 awr_snapshot_time_offset 0
342 sqltune_category DEFAULT
343 diagnostic_dest C:\ORACLE
344 tracefile_identifier
345 max_dump_file_size unlimited
346 trace_enabled TRUE

961262 wrote:
The used tables:
Table 1: 312 col, 12,248 MB, 11,138,561 rows, avg len 621 bytes
Table 2: 159 col, 4288 MB, 5,441,171 rows, avg len 529 bytes
Table 3: 118 col, 360MB, 820,259 rows, avg len 266 bytes
The FTS has improved as expected. With 5 physical drives in a RAID0, a performance of
420MB/s was achieved.
In the write test, on the other hand, we were not able to achieve any improvement.
The CTAS statement always works at about 5000-6000 blocks/s (80 MB/s).
But when we tried running several CTAS statements in different sessions, the overall speed increased as expected.
Further tests showed that the write speed seems to depend also on the number of columns. 80MB/s were only
possible with Tables 2 and 3. With Table 1, however only 30MB/s were measured.
If multiple CTAS can produce higher throughput on writes, this tells you that it is the production of the data that is the limit, not the writing. Notice in your example that nearly 75% of the time of the CTAS was CPU, not I/O.
The thing about the number of columns is that table 1 has exceeded the critical 254-column limit - this means Oracle has chained all the rows internally into two pieces; this introduces lots of extra CPU-intensive operations (consistent gets, table access by rowid, heap block compress), so the CPU time could have gone up significantly, resulting in a lower throughput that you are interpreting as a write problem.
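One way to check that CPU/write split for the session running the CTAS (a sketch; the bind variable stands in for that session's SID):
SELECT stat_name, ROUND(value / 1e6, 1) AS seconds
FROM   v$sess_time_model
WHERE  sid = :ctas_sid
AND    stat_name IN ('DB time', 'DB CPU');
-- If 'DB CPU' is close to 'DB time', the CTAS is CPU-bound rather than write-bound.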
One other thought - if you are currently doing CTAS by "create as select from {real SAP table}" there may be other side effects that you're not going to see. I would do "create test clone of real SAP table", then "create as select from clone" to try and eliminate any such anomalies.
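A sketch of that two-step test (the table names are placeholders, not from this system):
CREATE TABLE t1_clone AS SELECT * FROM real_sap_table;  -- isolates any source-side anomalies
CREATE TABLE t1_ctas  AS SELECT * FROM t1_clone;        -- time this statement on its own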
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: Oracle Core -
What are the key requirements to write a recursive CTE?
What are the key requirements to write a recursive CTE? When would we go for a recursive CTE?

A common table expression (CTE) can be thought of as a temporary result set that is defined within the execution scope of a single SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. A CTE is similar to a derived table in that it is not stored as an
object and lasts only for the duration of the query. Unlike a derived table, a CTE can be self-referencing and can be referenced multiple times in the same query.
A CTE can be used to:
Create a recursive query (see the sketch after this list). For more information, see https://technet.microsoft.com/en-us/library/ms186243%28v=sql.105%29.aspx
Substitute for a view when the general use of a view is not required; that is, you do not have to store the definition in metadata.
Enable grouping by a column that is derived from a scalar subselect, or a function that is either not deterministic or has external access.
Reference the resulting table multiple times in the same statement.
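A minimal recursive CTE sketch in T-SQL (the table and columns are illustrative, not from the original question). It shows the key requirements: an anchor member, UNION ALL, a recursive member that references the CTE itself, and a join condition that lets the recursion terminate:
WITH EmpHierarchy AS (
    -- anchor member: the starting rows (top-level managers)
    SELECT EmployeeID, ManagerID, 0 AS HierarchyLevel
    FROM   dbo.Employees
    WHERE  ManagerID IS NULL
    UNION ALL
    -- recursive member: joins back to the CTE, one level at a time
    SELECT e.EmployeeID, e.ManagerID, h.HierarchyLevel + 1
    FROM   dbo.Employees AS e
    JOIN   EmpHierarchy  AS h ON e.ManagerID = h.EmployeeID
)
SELECT EmployeeID, ManagerID, HierarchyLevel
FROM   EmpHierarchy
OPTION (MAXRECURSION 100);  -- guard against runaway recursion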
Source : https://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx -
Looking for a pdf reader which meets these requirements
I am looking for a pdf reader which meets these requirements:
1- Open pdf starting from last page opened during last session
2- Be able to scroll between pages vertically (not horizontally like many current ones do). Vertical scroll should be a continuous smooth scroll without doing a sudden full page pull in. Should behave as if pdf is one single very long page.
3- Should be able to change width of page and lock it. Page should not move sideways. Page width should be remembered so when same pdf is opened, it opens in that width.
4- A fast scroll feature. Useful for pdf's with hundreds of pages.
5- Nice to have feature: when opening the pdf reader, it automatically opens the last pdf file on the last page read.
I have tried all the free pdf readers in the AppStore and none met all these requirements except for two which had these issues:
1- iRead limitation: When screen is touched for more than a second, the page is frozen (locked) and page can't be scrolled. Scrolling is done when doing quick swipes only
2- FileApp limitation: Does not remember last page opened. Does not remember last width set. Fast scroll doesn't work properly when width is other than default.

iAnnotate additional points
iAnnotate works with Dropbox to download and upload edited PDFs. It does all the same things as the others (email, USB, iTunes, etc.), but it can also download PDFs from any web site.
You can transfer hundreds if not thousands of files at one whack using the AIJ utility on your desktop.
If you transfer a large number of files you have to plan not to use iAnnotate for a while, as it has an index function that indexes all the text into a master 'dictionary' so it can do searches for data and find PDFs for you. This indexing takes hours if you transfer hundreds of moderate-size PDFs at one time.
The biggest PDF I have fed it was a Gimp manual at close to a thousand pages.
Remember the scratch-pad RAM is only 256 MB in the current iPad. You can crash iAnnotate if you do something really dumb with such a large file. Other applications also grab and hold onto chunks of this RAM in the iPad, so it is best to force a memory reset before doing anything that is going to max out that RAM.
iAnnotate allows you to have more than one PDF open at a time and you can tab between all the open PDFs in a blink just as you would in a tabbed web browser.
If you zoom a page larger than the width of the screen it slides around. Less than the width of the screen the page locks in the size while scrolling which is smooth between pages.
iAnnotate is a very well made product for dealing with PDFs. Annotations display in Goodreader and in the Mac OS Preview.