Does buffer cache size matter during an imp process?
Hi,
sorry if this is maybe a naive question, but I can't imagine why Oracle needs a buffer cache (larger = better?) during inserts only (an imp process with no index creation).
As far as I know, a direct insert is done via the PGA area.
Please clarify this for me.
The DB is 10.2.0.3, if that matters :).
Regards.
Greg
Surprising result: I tried closing the db handles with DB_NOSYNC and performance
got worse. Using a 32 Meg cache, it took about twice as long to run my test:
15800 seconds using DB->close(DB_NOSYNC) vs 8200 seconds using DB->close(0).
Here is some data from db_stat -m when using DB_NOSYNC:
40MB 1KB 900B Total cache size
1 Number of caches
1 Maximum number of caches
40MB 8KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
26M Requested pages found in the cache (70%)
10M Requested pages not found in the cache (10811882)
44864 Pages created in the cache
10M Pages read into the cache (10798480)
7380761 Pages written from the cache to the backing file
3452500 Clean pages forced from the cache
7380761 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
10012 Current total page count
5001 Current clean page count
5011 Current dirty page count
4099 Number of hash buckets used for page location
47M Total number of times hash chains searched for a page (47428268)
13 The longest hash chain searched for a page
118M Total number of hash chain entries checked for page (118169805)
It looks like not flushing the cache regularly is forcing a lot more
dirty pages (and fewer clean pages) from the cache. Forcing a
dirty page out is slower than forcing a clean page out, of course.
Is this result reasonable?
I suppose I could try to sync less often than I have been, but more often
than never to see if that makes any difference.
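That middle ground can be scripted with Berkeley DB's cache APIs; a rough C sketch, untested, assuming an already-open DB_ENV handle:

```c
#include <db.h>

/* Flush part (or all) of the environment-wide cache periodically,
 * instead of paying for a full sync only at DB->close() time.
 * Error handling abbreviated. */
static void periodic_flush(DB_ENV *dbenv)
{
    int nwrote = 0;

    /* Ask the cache to keep at least 20% of its pages clean;
     * cheap enough to call from a timer or every N puts. */
    (void)dbenv->memp_trickle(dbenv, 20, &nwrote);

    /* Or, less frequently, flush every dirty page in the
     * whole environment cache. */
    /* (void)dbenv->memp_sync(dbenv, NULL); */
}
```

memp_trickle is the API behind the "Dirty pages written by trickle-sync thread" counter in the db_stat output above, which is currently 0.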
When I close or sync one db handle, I assume it flushes only that portion
of the dbenv's cache, not the entire cache, right? Is there an API I can
call that would sync the entire dbenv cache (besides closing the dbenv)?
Are there any other suggestions?
Thanks,
Eric
Similar Messages
-
Swapping and Database Buffer Cache size
I've read that setting the database buffer cache size too large can cause swapping and paging. Why is this the case? More memory for SQL data would not seem to be a problem, unless it is the proportion of the database buffer cache to the rest of the SGA that matters.
Well, I am always a defender of a large DB buffer cache. Setting a bigger db buffer cache alone will not in any way hurt Oracle performance.
However... as the buffer cache grows, the time to determine which blocks
need to be cleaned increases. Therefore, at a certain point the benefit of a
larger cache is offset by the time needed to keep it synced to disk. Beyond that point,
increasing the buffer cache size can actually hurt performance. That's the reason Oracle has checkpoints.
A checkpoint performs the following three operations:
1. Every dirty block in the buffer cache is written to the data files. That is, it synchronizes the data blocks in the buffer cache with the datafiles on disk.
It's the DBWR process that writes all modified database blocks back to the datafiles.
2. The latest SCN is written (updated) into the datafile headers.
3. The latest SCN is also written to the control files.
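The SCN bookkeeping in steps 2 and 3 can be observed from SQL*Plus; a minimal sketch, assuming a session with DBA privileges:

```sql
-- Checkpoint SCN currently recorded for the database
SELECT checkpoint_change# FROM v$database;

-- Force a checkpoint manually
ALTER SYSTEM CHECKPOINT;

-- The checkpoint SCN should now have advanced
SELECT checkpoint_change# FROM v$database;
```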
The following events trigger a checkpoint.
1. Redo log switch
2. LOG_CHECKPOINT_TIMEOUT has expired
3. LOG_CHECKPOINT_INTERVAL has been reached
4. The DBA requests one explicitly (alter system checkpoint) -
Hello,
I'd like to get the type of information in version 8 that I can get in version 9 through v$db_cache_advice in order to determine the size that the buffer cache should be. I've found sites that say you can set db_block_lru_extended_statistics to populate v$recent_bucket, but they say there is a performance hit. Can anyone tell me qualitatively how much of a performance hit this causes (obviously it would only be run this way for a short period of time), and whether or not this is really the best/right way to do this?
Thanks.
Actually, ours is a bank database.
Our database size is 400GB.
Last month they got an ORA-00604 error,
so the production database hung for 15 minutes; the issue resolved itself automatically after 15 minutes.
At that time the complete buffer cache was flushed out & all Oracle processes were terminated.
Because of that, they increased the buffer cache size. -
I need to calculate the buffer cache usage for a get operation.
SELECT o.object_name, h.status, count(*) number_of_blockes
FROM V$BH h, DBA_OBJECTS o WHERE h.objd=o.data_object_id
AND o.owner NOT IN('SYS','SYSTEM','SYSMAN')
AND h.status NOT IN('free')
GROUP BY o.object_name,h.status
ORDER BY count(*) DESC;
I used the above query, so I got the number of blocks used to cache data.
I performed a get operation in one db and noted the number of blocks.
But the problem is that the same operation in another db shows a different number of blocks.
Both dbs have the same configuration.
Has anyone noticed this issue??
Why do you expect them to be the same?
Oracle version of each database?
Number of objects in each database?
Size of buffer cache in each database?
The amount of query activity that would actually load blocks into the buffer cache in each database is not likely to be "the same".
Identical data can take up a different number of blocks in different databases, depending on how it was loaded, transactions on that data, etc, so the number of blocks used in the buffer cache is likely to be different in different databases, even for the same data set. -
Suggest buffer cache size check
Hi experts,
please suggest what size to give the buffer cache, and please tell me how to calculate this.
Note: the database runs huge selects with where clauses.
>
SQL> show sga
Total System Global Area 536870912 bytes
Fixed Size 1220408 bytes
Variable Size 117440712 bytes
Database Buffers 411041792 bytes
Redo Buffers 7168000 bytes
>
>
SGA ADVISOR
SQL> column c1 heading 'Cache Size (m)' format 999,999,999,999
SQL> column c2 heading 'Buffers' format 999,999,999
SQL> column c3 heading 'Estd Phys|Read Factor' format 999.90
SQL> column c4 heading 'Estd Phys| Reads' format 999,999,999,999
SQL>
SQL> select
  2    size_for_estimate c1,
  3    buffers_for_estimate c2,
  4    estd_physical_read_factor c3,
  5    estd_physical_reads c4
  6  from
  7    v$db_cache_advice
  8  where
  9    name = 'DEFAULT'
 10  and
 11    block_size = (SELECT value FROM V$PARAMETER
 12                  WHERE name = 'db_block_size')
 13  and
 14    advice_status = 'ON';
Estd Phys Estd Phys
Cache Size (m) Buffers Read Factor Reads
36 4,491 1.02 1,768,088,631
72 8,982 1.01 1,751,858,036
108 13,473 1.01 1,745,807,886
144 17,964 1.00 1,742,684,545
180 22,455 1.00 1,740,606,287
216 26,946 1.00 1,739,127,030
252 31,437 1.00 1,737,935,545
288 35,928 1.00 1,736,936,513
324 40,419 1.00 1,736,098,119
360 44,910 1.00 1,735,368,624
392 48,902 1.00 1,734,775,608
396 49,401 1.00 1,734,701,493
432 53,892 1.00 1,734,086,804
468 58,383 1.00 1,733,466,505
504 62,874 1.00 1,732,871,083
540 67,365 1.00 1,732,300,725
576 71,856 1.00 1,731,737,930
612 76,347 1.00 1,731,204,779
648 80,838 1.00 1,730,669,455
684 85,329 1.00 1,730,117,349
720 89,820 .98 1,703,583,925
21 rows selected.
Dictionary Cache Hit Ratio : 99.92% Value Acceptable.
Library Cache Hit Ratio : 98.22% Increase SHARED_POOL_SIZE parameter to bring value above 99%
DB Block Buffer Cache Hit Ratio : 60.53% Increase DB_BLOCK_BUFFERS parameter to bring value above 89%
Latch Hit Ratio : 99.72% Value acceptable.
Disk Sort Ratio : 0.00% Value Acceptable.
Rollback Segment Waits : 0.00% Value acceptable.
Dispatcher Workload : 0.00% Value acceptable.
>
Edited by: 928992 on Oct 18, 2012 2:31 PM
Edited by: 928992 on Oct 18, 2012 3:04 PM
I am displaying my test db's buffer cache size (11.2.0.1 on a Windows box):
SQL> show parameter db_cache_size;
NAME TYPE VALUE
db_cache_size big integer 0
SQL> select name, current_size, buffers, prev_size, prev_buffers from v$buffer_pool;
NAME CURRENT_SIZE BUFFERS PREV_SIZE PREV_BUFFERS
DEFAULT 640 78800 0 0
SQL> select name,bytes from v$sgainfo where name='Buffer Cache Size';
NAME BYTES
Buffer Cache Size *671088640*
SQL> show sga;
Total System Global Area 1603411968 bytes
Fixed Size 2176168 bytes
Variable Size 922749784 bytes
*Database Buffers 671088640 bytes*
Redo Buffers 7397376 bytes
SQL> select * from v$sga;
NAME VALUE
Fixed Size 2176168
Variable Size 922749784
*Database Buffers 671088640*
Redo Buffers 7397376
SQL> show parameter sga_target;
NAME TYPE VALUE
sga_target big integer 0
SQL>
Regards
Girish Sharma
Edited by: Girish Sharma on Oct 18, 2012 2:51 PM
Oracle and OS Info added. -
Hi there,
Can anyone explain these results on a 10.2.0.1 database?
SQL> select name, bytes/1024/1024 from v$sgainfo where name='Buffer Cache Size';
NAME BYTES/1024/1024
Buffer Cache Size 1728
SQL> show parameter db_cache_size
NAME TYPE VALUE
db_cache_size big integer 768M
NAME TYPE VALUE
sga_target big integer 0
As you can see, AMM is disabled (sga_target=0); however, v$sgainfo and dba_hist_sga always record a buffer cache size of 1728 MB, while db_cache_size is set to 768 MB.
How can it be?
Many thanks.
I answer myself:
db_8k_cache_size big integer 160M
db_cache_size big integer 768M
db_keep_cache_size big integer 400M
db_recycle_cache_size big integer 400M
I am like a newbie...
How to reduce max buffer/cache size?
Hi,
every time I copy a file which is bigger than or similar in size to my total RAM (4GB), I notice very low responsiveness from firefox (which is totally unresponsive; I can't switch tabs or scroll for 30-60s). Of course my free memory is very low (something like 50-100MB) and I notice some swap usage. AFAIK linux caches everything that is being copied, but in the case of such big files it seems unnecessary.
Is there a way to reduce max buffer size?
I know that buffering is good in general, but I get a feeling that firefox is giving up RAM and has to read everything again from disk, which slows it down. I always have many tabs open, so it often holds around 30% of memory.
I searched many times on how to reduce buffer sizes, but I've always found only articles with "buffering is always good and never an issue" attitude.
I would be very happy to hear any suggestions,
cheers,
kajman
This seems to be a popular problem, going back years. The default Linux setup is bad for responsiveness, it seems.
Here's the summary of what I do:
Firstly, install a BFS-patched kernel, for a better kernel scheduler, and also so that the ionice and schedtool commands will work. Bonus points for switching to BFQ while you're at it - or stick with CFQ, which also supports ionice.
In /etc/fstab, use commit=60 rather than default of 5 seconds, and also noatime, e.g.:
UUID=73d55f23-fb9d-4a36-bb25-blahblah / ext4 defaults,noatime,nobarrier,commit=60 1 1
In /etc/sysctl.conf
# From http://rudd-o.com/en/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that
vm.swappiness=0
# https://lwn.net/Articles/572921/
vm.dirty_background_bytes=16777216
vm.dirty_bytes=50331648
In ~/.bashrc - see post, e.g.:
alias verynice="ionice -c3 nice -n 15"
In /etc/security/limits.d/ - see post. Read CK's excellent blog article, for info.
In your cp command, add the word verynice to the start, to stop the large batch copy from having the same priority as your UI.
Compile sqlite without fsync, to make e.g. firefox smoother.
Potentially use threadirqs to prioritize the interrupt-handling.
Edit: Updated vm.swappiness from 0 to 10, from CK's blog.
Edit2: Also see patch and e.g. nr_requests in thread.
Edit3: Using nice instead of schedtool - not sure whether schedtool can hog the CPU.
Edit4: Added threadirqs.
Edit5: Tweaked sysctl.conf settings.
Edit6: Added nobarrier option to mount, and sqlite's fsync.
Edit7: Removed swap comment - I do use a swapfile, these days, mainly because firefox needs so much virtual RAM to compile.
Last edited by brebs (2014-03-10 09:51:34) -
Nothing is wrong really, everything's running just fine. I just can't find info at all on these 2 settings anywhere.
(I'm a capable but not advanced computer user)
They were added as part of this bug:
*[https://bugzilla.mozilla.org/show_bug.cgi?id=545869 Bug 545869] – Remove small buffer #defines and use preferences. -
Should I increase my Buffer Cache?
Oracle 9i
Shared Pool 2112 Mb
Buffer Cache 1728 Mb
Large Pool 32Mb
Java Pool 32 Mb
Total 3907.358 Mb
SGA Max Size 17011.494 Mb
PGA
Aggregate PGA Target 2450 Mb
Current PGA Allocated 3286059 KB
Maximum PGA Allocated (since Startup) 3462747 KB
Cache Hit Percentage 98.71%
The Buffer Cache Size advisor is telling me that if I increase the Buffer Cache to 1930MB I will get an 8.83% decrease in physical reads (and it gets better the more I increase it).
The question is... can I safely increase it (in light of my current memory allocations)? Is it worth it?
Two things stand out:
Your sga max size is 17Gb, but you are only using about 4Gb of it - so you seem to have 13Gb that you are not making best use of.
Your pga aggregate target is 2.4Gb, but you've already hit a peak of 3.4Gb - which means your target may be too small - so it's lucky you had all that spare memory which hadn't gone into the SGA. Despite the availability of memory, some of your queries may have been rationed at run-time to try to minimise the excess demand.
Is this OLTP or DSS - where do you really need the memory ? (Have a look in v$process to see the pga usage on a process by process level).
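That per-process check might look like this; a sketch against v$process (the MB rounding is mine):

```sql
-- PGA memory per server process, biggest consumers first
SELECT p.spid,
       p.program,
       ROUND(p.pga_used_mem  / 1024 / 1024) AS pga_used_mb,
       ROUND(p.pga_alloc_mem / 1024 / 1024) AS pga_alloc_mb,
       ROUND(p.pga_max_mem   / 1024 / 1024) AS pga_max_mb
FROM   v$process p
ORDER  BY p.pga_alloc_mem DESC;
```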
How many processes are allowed to connect to the database ? (You ought to allow about 2Mb - 4Mb per process to the pga_aggregate_target for OLTP) and at least 1Mb per process for the buffer cache.
Where do you see time lost ? time on disk I/O, or time on CPU ? What type of disk I/O, what's the nature of the CPU usage. These figures alone do not tell us what you should do with the spare memory you seem to have.
A simple response to your original question would be that you probably need to increase the pga_aggregate_target, and you might as well increase the buffer size since you seem to have the memory for both.
On the downside, changing the pga_aggregate_target could cause some execution plans to change; and changing the buffer size does change the limit size on a 'short' table, which can cause an increase in I/O as an unlucky side effect if you're a little heavy on "long" tablescans.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
Hi,
I would appreciate some advice please.
My Oracle 10.2.0.4.0 database starts off in the day with a buffer cache hit ratio of almost 100% but this drops gradually in the course of the day as the system gets busier. Is this something I need to be concerned about if no performance problem has been reported by the users?
I would have thought this would be a normal situation, i.e. as the system gets busier, I would expect fewer calls to the buffer cache to be satisfied because many more calls are competing for the memory?
Please note that I've done some reading up on this but would like some suggestions from more experienced people than myself on what I should normally expect and what should or should not be a concern.
thanks
user8869798 wrote:
hi,
thanks again for your response and apologies for not being able to get back yesterday.
No problem :) .
>
- What I am doing and how - it is the backend database for our Finance application, many users with various transactions.
Okay, that sounds like a "normal" database with a normal workload.
- configuring buffer cache size - I haven't done anything manually yet, it's all been as installed. This is what I'm trying to figure out: whether it is something I should be looking into doing, simply because of the dropping hit ratio and not because of any reported performance problem.
If you don't have much knowledge of this, it's best to make use of the advisories, which will tell you in a better and more graphical way whether you should or shouldn't be worried. Look at the view v$db_cache_advice, which can suggest whether you need to tweak the buffer cache or not.
>
- looking at the queries - What is the easiest way of doing this in an environment where many users are running different queries? What's the easiest way to identify queries that we may need to have a closer look at?
Easiest way? Well, let the users come back to you ;-) .
>
- Basically, I'm just trying to ascertain whether or not I need to be concerned that my hit ratio drops below 89% even though no performance problem has been reported. If it is something that I should look into, then what is the best way to go about it?
I believe that's been answered by a couple of us already: nope.
Aman....
Aman.... -
SQL query executes faster the 2nd time despite clearing buffer cache/shared pool
Hi All,
I want to test query performance, so I clear the cache in the following way before each execution:
alter system flush buffer_cache;
alter system flush shared_pool;
But the first execution takes more time than the second and subsequent executions. For example, the first execution takes 30 seconds; subsequent executions take 3 seconds.
Before each execution I clear the cache.
What can be the reason? I use TOAD for query execution. Does TOAD cache something after the first execution?
And a last question: is there a dynamic view where query execution duration is stored?
Any help will be appreciated.
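On the last question: cumulative execution statistics per cached statement live in v$sql; a sketch (elapsed_time is in microseconds):

```sql
-- Average elapsed time per execution for statements still in the shared pool
SELECT sql_text,
       executions,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1000000, 2) AS avg_elapsed_sec,
       disk_reads,
       buffer_gets
FROM   v$sql
WHERE  executions > 0
ORDER  BY elapsed_time DESC;
```

Keep in mind that flushing the shared pool empties v$sql too, so capture these figures before the flush.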
Thanks.
>
So there shouldn't be a problem from the parameter; can you post the results from V$SGA_DYNAMIC_COMPONENTS and V$SGAINFO?
V$SGA_DYNAMIC_COMPONENTS
COMPONENT CURRENT_SIZE MIN_SIZE MAX_SIZE USER_SPECIFIED_SIZE OPER_COUNT LAST_OPER_TYP LAST_OPER LAST_OPER GRANULE_SIZE
shared pool 436207616 402653184 0 0 125 GROW IMMEDIATE 02-JUL-09 16777216
large pool 201326592 117440512 0 83886080 12 SHRINK DEFERRED 02-JUL-09 16777216
java pool 16777216 16777216 0 0 0 STATIC 16777216
streams pool 16777216 16777216 0 0 0 STATIC 16777216
DEFAULT buffer cache 1895825408 1711276032 0 16777216 137 GROW DEFERRED 02-JUL-09 16777216
KEEP buffer cache 0 0 0 0 0 STATIC 16777216
RECYCLE buffer cache 0 0 0 0 0 STATIC 16777216
DEFAULT 2K buffer cache 0 0 0 0 0 STATIC 16777216
DEFAULT 4K buffer cache 0 0 0 0 0 STATIC 16777216
DEFAULT 8K buffer cache 0 0 0 0 0 STATIC 16777216
DEFAULT 16K buffer cache 0 0 0 0 0 STATIC 16777216
DEFAULT 32K buffer cache 0 0 0 0 0 STATIC 16777216
ASM Buffer Cache 0 0 0 16777216 0 STATIC 16777216
V$SGAINFO
NAME BYTES RESIZEABL
Fixed SGA Size 2086392 No
Redo Buffers 14688256 No
Buffer Cache Size 1895825408 Yes
Shared Pool Size 436207616 Yes
Large Pool Size 201326592 Yes
Java Pool Size 16777216 Yes
Streams Pool Size 16777216 Yes
Granule Size 16777216 No
Maximum SGA Size 2634022912 No
Startup overhead in Shared Pool 218103808 No
Free SGA Memory Available 50331648 -
Can anyone please clarify my doubt: which parameter defines the size of the db buffer cache in the SGA? Does db_cache_size directly define the size, or is it the size of the cache of standard blocks (specified by the db_block_size parameter)?
DB_BLOCK_BUFFERS specifies the number of blocks to allocate for the data buffer. This parameter's value is then multiplied by DB_BLOCK_SIZE to calculate the size of the data buffer.
DB_CACHE_SIZE specifies the size value itself, directly in units of KB, MB or GB. This parameter alone is enough to calculate the data buffer cache size.
DB_BLOCK_BUFFERS can only create a buffer cache in units of blocks based on one parameter, DB_BLOCK_SIZE. On the other hand, multiple data buffer caches can be created by using the DB_nK_CACHE_SIZE parameters, where n is the block size for the buffer cache. So, for example, one can allocate X MB of buffer cache with an 8K block size and have Y MB of buffer cache with 16K blocks. This helps when you have tablespaces of varying block sizes (this is not possible using DB_BLOCK_BUFFERS, as DB_BLOCK_SIZE is not modifiable).
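For example, the multiple-cache setup described above might be configured like this (a sketch; the sizes are made-up values):

```sql
-- Cache for the standard block size (here assumed to be 8K)
ALTER SYSTEM SET db_cache_size = 256M;

-- Additional cache for tablespaces created with a 16K block size
ALTER SYSTEM SET db_16k_cache_size = 64M;
```

Note that DB_nK_CACHE_SIZE cannot be set for the n that equals the standard DB_BLOCK_SIZE; that cache is sized by DB_CACHE_SIZE itself.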
DB_CACHE_SIZE can work along with the SGA_TARGET parameter (which decides the SGA size). If DB_CACHE_SIZE is 0, then its value varies based on usage. If a value is set, then that value becomes a minimum. This is not possible using DB_BLOCK_BUFFERS. -
ADO Recordset Cache Size Breaking SQL Reads
I've got a C++ application that uses ADO/ODBC to talk to various databases using SQL.
In an attempt to optimize performance, we modified the Cache Size parameter on the Recordset object from the default Cache Size of 1 to a slightly larger value. This has worked well for SQL Server and Access databases to increase the performance of our SQL reads.
However, talking to our Oracle 8i (8.1.6 version) database, adjusting the Cache Size causes lost records or lost fields.
We've tried the same operation using a VB application and get similar results, so it's not a C++ only problem.
For the VB app, changing the cursor-type from ForwardOnly to Dynamic does affect the problem, but neither work correctly. With a ForwardOnly cursor the string fields start coming back NULL after N+1 reads, where N is the Cache Size parameter. With a Dynamic cursor, whole records get dropped instead of just string fields: for example with a Cache Size of 5, the 2nd, 3rd, 4th and 5th records are not returned.
In our C++ application, the symptom is always lost string fields, regardless of these two cursor types.
I've tried updating the driver from 8.01.06.00 to the latest 8.01.66.00 (8.1.6.6) but this didn't help.
Is anybody familiar with this problem? know any workarounds?
Thanks
[email protected]
-
Hello Oracle community!
I'm having issues because of my current SGA size on a Win 32-bit platform; it is normally 6,811,549,696 bytes. There was another database administrator before me, but he left the company; his explanation was that he set up this size because at that time RAM was not a problem.
The OS' parameters are set to read the entire 8GB RAM.
If I set SGA_TARGET to around 6GB, will I just have ~800MB less for my db and no problems at all? Is there any way to check things better than just setting SGA_TARGET and praying?
Thank you very much!
@Elios
Oracle Version 10.2.0.3.0
It's not an OLTP Database.
@mehmet eser
NAME BYTES RESIZEABLE
Fixed SGA Size 1289604 No
Redo Buffers 7098368 No
Buffer Cache Size 6442450944 Yes
Shared Pool Size 343932928 Yes
Large Pool Size 8388608 Yes
Java Pool Size 8388608 Yes
Streams Pool Size 0 Yes
Granule Size 8388608 No
Maximum SGA Size 6811549696 No
Startup overhead in Shared Pool 268435456 No
Free SGA Memory Available 0
11 rows selected
NAME VALUE
Fixed Size 1289604
Variable Size 360710780
Database Buffers 6442450944
Redo Buffers 7098368
4 rows selected
To use more than the basic limit on 32-bit Windows I used /PAE in boot.ini.
Hello All,
While I was going through the Oracle architecture, a doubt occurred to me, and I thought this would be the right place to ask.
What happens when a user tries to read a table (full table scan), say around 10GB in size, from a DB with only 2 or 3 GB of DB buffer cache?
Your inputs will greatly help.
Thanks.
user8710159 wrote:
Hello All,
While I was going through the Oracle architecture, a doubt occurred to me, and I thought this would be the right place to ask.
What happens when a user tries to read a table (full table scan), say around 10GB in size, from a DB with only 2 or 3 GB of DB buffer cache?
Oracle will return the requested result set.
Handle: user8710159
Status Level: Newbie (5)
Registered: Sep 15, 2009
Total Posts: 194
Total Questions: 77 (61 unresolved)
why so many unanswered questions?
Edited by: sb92075 on Mar 27, 2012 11:47 AM