Querying oracle cursor cache
When I create a SQL tuning set, OEM gives me the option to use the cursor cache. I am trying to figure out which views Oracle joins to get the SQL statements and map them to specific schemas. I know about V$SQL.
Hi,
Check V$SQLAREA.
- Pavan Kumar N
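For instance, a sketch that joins cached statements to their parsing schema — PARSING_SCHEMA_NAME is a standard V$SQL column from 10g onward; the schema filter list here is only illustrative:

```sql
-- Map cached SQL statements to the schema that parsed them.
SELECT s.parsing_schema_name,
       s.sql_id,
       s.executions,
       s.sql_text
FROM   v$sql s
WHERE  s.parsing_schema_name NOT IN ('SYS', 'SYSTEM')
ORDER  BY s.executions DESC;
```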
Similar Messages
-
10.1.3.3.2
Can the information below be saved in a table, or is there a log file where it is kept? I need it for historical reporting.
http://localhost:9704/analytics/saw.dll?Sessions
[SESSION]
User ID Host Address Session ID Browser Info Logged On Last Access
[CURSOR CACHE]
D User Refs Status Time Action Last Accessed Statement Information
Thanks in advance.
Will Usage Tracking help you?
You will find it in the Documentation: Oracle® Business Intelligence Server Administration Guide: Chapter 10 Administering the Oracle BI Server Query Environment.
Regards,
Stefan Hess
http://download.oracle.com/docs/cd/E10415_01/doc/bi.1013/b31770.pdf -
ORA-02103: PCC: inconsistent cursor cache
I have been hit by the error ORA-02103: PCC: inconsistent cursor cache (out-of-range cuc ref). It occurs when a user executes thousands of update statements in one go. I have placed a commit after every 50 records, but the problem is still there. It shuts down Oracle, and I have to start the database up again. I'm running the query from Toad; the server machine is connected remotely from Toad using TNS. Why does this error occur, and what is the best way to update records? I have also used a PARALLEL hint. Committing after every 50 records reduces how often the error occurs, but does not solve the problem completely.
As per the Oracle error description:
Error: SQL 2103
Text: Inconsistent cursor cache (out-of-range CUC ref)
Cause: The precompiler generates a unit cursor entry (UCE) array. An element
in this array corresponds to an entry in the cursor cache (CUC). While
doing a consistency check on the cursor cache, SQLLIB found that the
UCE array contains an ordinal value that is either too large or less
than zero. This happens only if your program runs out of memory.
Action: Allocate more memory to your user session, then rerun the program. If
the error persists, call customer support for assistance.
Is the user connected via a dedicated or shared server? How much memory is used during the update? Is it enough?
Is parameter open_cursors high enough?
And as you can see from the error description: call Oracle support. Raise an SR with Oracle; they will investigate, ask for dump files, look through them, and propose a solution. -
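If the real goal is simply to make the mass update survivable, one alternative worth trying is batching the work server-side in PL/SQL rather than sending thousands of statements from Toad. A minimal sketch; the table and column names are placeholders:

```sql
-- Hypothetical batched update: process rows in chunks of 1000
-- and commit once per chunk instead of once per statement.
DECLARE
  CURSOR c IS
    SELECT rowid AS rid FROM my_table WHERE status = 'OLD';
  TYPE rid_t IS TABLE OF ROWID;
  l_rids rid_t;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rids LIMIT 1000;
    EXIT WHEN l_rids.COUNT = 0;
    FORALL i IN 1 .. l_rids.COUNT
      UPDATE my_table SET status = 'NEW' WHERE rowid = l_rids(i);
    COMMIT;  -- one commit per batch of 1000 rows
  END LOOP;
  CLOSE c;
END;
/
```

Note that fetching across commits can expose the loop to ORA-01555 on a busy system; sizing undo appropriately is assumed here.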
Hi All,
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
I will not be able to share the query due to company policy.
The OEM plan shows a merge join Cartesian for the query. I know the plan is not correct, as it is based on an incorrect cardinality estimate. I have a SQL profile set on this query:
OEM shows as :
Data Source : Cursor Cache
Additional Information : 'SYS_SQL_PROFXXXXXX' (X is some number)
Here is what is happening:
1. The table involved in the merge join is purged daily (EOD, i.e. 12 AM), so it has no rows.
2. Around 4 AM a process populates this table and then uses it in a query. The query plan has a merge join Cartesian (MJC), and it completes quickly because the number of rows is very small.
3. Around 6 AM the same process is triggered again. This time the table has a huge number of rows, but the query picks up the same MJC plan and runs for hours because of the incorrect cardinality. When I run the SQL advisor on the query it shows an optimized plan; if I kill the process and re-run it, it works fine (the query finishes within 3 seconds).
I guess at 6 AM it still picks up the previous merge join plan (built when the row count was small) from the cursor cache, and OEM also shows the data source as Cursor Cache. Can we invalidate the cached plan if this is the case?
Please help, how can we handle this one?
I think you are addressing a common problem in data warehouses: there are staging tables, sometimes empty, sometimes with millions of rows, so the statistics may not be realistic. What is the result of the following query:
select num_rows, last_analyzed from dba_tables where table_name = '<your_table>';
If this is the problem, you should to consider one of the following strategies:
1) Analyze the table when it is "full" and ensure that no ANALYZE TABLE (or GATHER_SCHEMA_STATS) ever runs over this table afterwards. This strategy works fine if the table is populated with similar data every day, but you may need to change a gather_schema_stats job schedule; you should be aware of when and how the statistics are updated.
2) Populate the table, run GATHER_TABLE_STATS over it, wait for gather_table_stats to complete, and only then trigger the 6 AM process. You may need to start earlier than 6 AM to leave time for the statistics gathering.
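Strategy 2 could be sketched with DBMS_STATS; the schema and table names are placeholders, and no_invalidate => FALSE is what forces dependent cached cursors to be re-parsed with the new statistics:

```sql
BEGIN
  -- Gather stats on the freshly populated staging table before the 6am run.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MYSCHEMA',
    tabname          => 'STAGING_TABLE',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE,       -- gather index stats as well
    no_invalidate    => FALSE);     -- invalidate dependent cached cursors now
END;
/
```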
I hope this helps
Regards,
Alfonso -
Using Oracle CURSOR in SAP PI 7.0
Hi All,
I have a scenario in which I need to get data from an Oracle database using a cursor, but I couldn't find a direct method in PI to do the same using the JDBC sender channel.
Can anyone tell me how we can access Oracle cursors in SAP PI?
I have a workaround using temporary tables in the Oracle DB, but I would like to know how we can use a cursor.
I am using SAP PI 7.0 EHP 1
Thanks in advance…
Regards,
Manish Antony
Hi Krish,
I need to retrieve more than one record at a time, and an Oracle stored procedure cannot return multiple records the way a SQL Server procedure can. The way to get more than one record from an Oracle stored procedure is through cursors.
Also, I may not be able to use a standard query because I have complex logic to retrieve the data (I need to get data from multiple main tables and lookup tables based on different conditions, and also need to update the tables to avoid data duplication in the target system), which cannot be put into the SELECT and UPDATE statements in the communication channel.
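For reference, the usual Oracle-side pattern is a procedure with a SYS_REFCURSOR OUT parameter; the names below are hypothetical, and whether the PI JDBC sender channel can consume a ref cursor is a separate question:

```sql
CREATE OR REPLACE PROCEDURE get_orders (
  p_status IN  VARCHAR2,
  p_result OUT SYS_REFCURSOR)
AS
BEGIN
  -- The caller fetches the result set through the returned cursor.
  OPEN p_result FOR
    SELECT order_id, customer_id, amount
    FROM   orders
    WHERE  status = p_status;
END get_orders;
/
```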
Regards,
Manish Antony -
ADF application integrating with Oracle Web Cache
Hello,
I am trying to integrate my ADF 11g application with Oracle Web Cache. I used this link http://andrejusb.blogspot.com/2010/06/oracle-webtier-11g-configuration-for.html for it.
I am able to access my ADF application using webcache port 7785.
I created a few caching rules in Oracle Web Cache, and in the popular requests section I see jpg, png and other image files cached.
But the issue arises when the application accesses images like /testapp/test/images/abc.jpg?_adf.ctrl-state=5b0s7lzfo_29. I created a caching rule with the regular expression ^/testapp/test/images/[A-Za-z0-9_]*\.(gif|jpeg|png|jpg)\?_adf\.ctrl-state=[A-Za-z0-9_]*$.
But when I check the popular requests in EM I don't see the URL above as cached; the caching reason given is "URL contains query string".
I am not sure if I need to do anything additional to cache these URLs as well.
Thanks!
Ram
Yes, that works. But my question is how to cache the URLs which have a query string. I was trying to give a regular expression to match the URL, so that URLs containing parameters like _afrLoop, which changes with each HTTP request, can also be cached.
-
ORA-13773: insufficient privileges to select data from the cursor cache
We are trying to create STS using the below query:
exec sys.dbms_sqltune.create_sqlset(sqlset_name => 'TEST_STS', -
sqlset_owner => 'SCOTT');
The procedure below will load SQL starting with 'select /*MY_CRITICAL_SQL*/%' from the cursor cache into the STS TEST_STS.
DECLARE
  stscur dbms_sqltune.sqlset_cursor;
BEGIN
  OPEN stscur FOR
    SELECT VALUE(P)
    FROM   TABLE(dbms_sqltune.select_cursor_cache(
             'sql_text like ''select /*MY_CRITICAL_SQL*/%''',
             null, null, null, null, null, null, 'ALL')) P;
  dbms_sqltune.load_sqlset(sqlset_name     => 'TEST_STS',
                           populate_cursor => stscur,
                           sqlset_owner    => 'SCOTT');
END;
/
We were getting the following error: ORA-13761: invalid filter.
After granting the privileges below to the user, we now get this error:
Err msg:
ERROR at line 1:
ORA-13773: insufficient privileges to select data from the cursor cache
ORA-06512: at "SYS.DBMS_SQLTUNE", line 2957
ORA-06512: at line 10
For SQL Tuning Sets:
GRANT ADMINISTER ANY SQL TUNING SET TO scott;
For Managing SQL Profiles:
GRANT CREATE ANY SQL PROFILE TO scott;
GRANT ALTER ANY SQL PROFILE TO scott;
GRANT DROP ANY SQL PROFILE TO scott;
For SQL Tuning Advisor:
GRANT ADVISOR TO scott;
Others:
GRANT SELECT ON V_$SQL TO SCOTT;
GRANT SELECT ON V_$SQLAREA TO SCOTT;
GRANT SELECT ON V$SQLAREA_PLAN_HASH TO SCOTT;
GRANT SELECT ON V_$SQLSTATS TO SCOTT;
grant select on sys.DBA_HIST_BASELINE to SCOTT;
grant select on sys.DBA_HIST_SQLTEXT to SCOTT;
grant select on sys.DBA_HIST_SQLSTAT to SCOTT;
grant select on sys.DBA_HIST_SQLBIND to SCOTT;
grant select on sys.DBA_HIST_OPTIMIZER_ENV to SCOTT;
grant select on sys.DBA_HIST_SNAPSHOT to SCOTT;
Any info from your end to resolve the issue will be of great help.
Thanks
What is the alert log reporting? Are you seeing any other errors besides these in the alert log?
-
Querying the toplink cache under high-load
We've had some interesting experiences with "querying" the TopLink Cache lately.
It was recently discovered that our "read a single object" method was incorrectly
setting query.checkCacheThenDB() for all ReadObjectQueries. This was brought to light
when we upgraded our production servers from 4 cores to 8. We immediately started
experiencing very long response times under load.
We traced this down to the following stack: (TopLink version 10.1.3.1.0)
at java.lang.Object.wait(Native Method)
- waiting on <0x00002aab08fd26d8> (a oracle.toplink.internal.helper.ConcurrencyManager)
at java.lang.Object.wait(Object.java:474)
at oracle.toplink.internal.helper.ConcurrencyManager.acquireReadLock(ConcurrencyManager.java:179)
- locked <0x00002aab08fd26d8> (a oracle.toplink.internal.helper.ConcurrencyManager)
at oracle.toplink.internal.helper.ConcurrencyManager.checkReadLock(ConcurrencyManager.java:167)
at oracle.toplink.internal.identitymaps.CacheKey.checkReadLock(CacheKey.java:122)
at oracle.toplink.internal.identitymaps.IdentityMapKeyEnumeration.nextElement(IdentityMapKeyEnumeration.java:31)
at oracle.toplink.internal.identitymaps.IdentityMapManager.getFromIdentityMap(IdentityMapManager.java:530)
at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.checkCacheForObject(ExpressionQueryMechanism.java:412)
at oracle.toplink.queryframework.ReadObjectQuery.checkEarlyReturnImpl(ReadObjectQuery.java:223)
at oracle.toplink.queryframework.ObjectLevelReadQuery.checkEarlyReturn(ObjectLevelReadQuery.java:504)
at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:564)
at oracle.toplink.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:779)
at oracle.toplink.queryframework.ReadObjectQuery.execute(ReadObjectQuery.java:388)
We moved the query back to the default, query.checkByPrimaryKey() and this issue went away.
The bottleneck seemed to stem from the read lock on the CacheKey from IdentityMapKeyEnumeration:
public Object nextElement() {
    if (this.nextKey == null) {
        throw new NoSuchElementException("IdentityMapKeyEnumeration nextElement");
    }
    // CR#... Must check the read lock to avoid
    // returning half built objects.
    this.nextKey.checkReadLock();
    return this.nextKey;
}
We had many threads that were having contention while searching the cache for a particular query.
From the stack we know that the contention was limited to one class. We've since refactored that code
not to use a query in that code path.
Question:
Armed with this better knowledge of how TopLink queries the cache, we do have a few objects that we
frequently read by something other than the primary key. A natural key, but not the oid.
We have some other caching mechanisms in place (JBoss TreeCache) to help eliminate queries to the DB
for these objects. But the TreeCache also tries to acquire a read lock when accessing the cache.
Presumably a read lock over the network to the cluster.
Is there anything that can be done about the read lock on CacheKey when querying the cache in a high load
situation?
CheckCacheThenDatabase will check the entire cache for a match using a linear search. This can be inefficient if the cache is very large. Typically it is more efficient to access the database if your cache is large and the field you are querying on is indexed in the table.
The cache concurrency was greatly improved in TopLink 11g/EclipseLink, so you may wish to try it out.
Supporting indexes in the TopLink/EclipseLink cache is desirable (feel free to log the enhancement request on EclipseLink). You can simulate this to some degree using a named query and a query cache.
-- James : http://www.eclipselink.org -
Administration Tool Cache vs. Cursor Cache
Hi everyone,
Someone asked me what the difference is between the cache in the Administration Tool (Manage -> Cache) and the cursor cache (Settings -> Administration -> Manage Sessions), and even though I've cleared them both many a time, I'm still not sure of the difference.
Can someone explain to me the difference between the two?
Thanks!
-Joe
Hi,
The cache in the administration tool is a file based cache on the OBIEE server which stores the results of database requests. This means that if a user makes a request the OBIEE server first checks the cache to see if the query has already been run and cached, or if a superset of the query has been run and cached (i.e. a less restrictive query that the current query can be satisfied from). If it finds there is a cache entry then it will return the results from here instead of issuing any SQL to the database, thereby speeding up getting the results back to the user.
The cache shown in the cursor cache is the cache on the presentation server. This is a cache of the content being returned to the user's browser, which means that if the user goes back to see results for a query they have already made, the presentation server can just return the same content without having to go to the OBIEE server again at all.
So basically there are two levels of caching: one on the OBIEE server and one on the presentation server.
Regards,
Matt -
I want to ask how to keep a query in the buffer cache always?
A query which runs frequently in a database will tend to stay in the buffer cache due to the LRU algorithm; LRU will not flush it out while it is running frequently. But if I want to place a query explicitly in the buffer cache, what should I do? I recently completed the Oracle DBA courseware, and in an interview the interviewer asked me what I would do if he wanted to place a query explicitly in the buffer cache.
As a fresher my knowledge is limited, and I am trying to improve it continuously.
The interviewer was a senior Oracle DBA. Based on my knowledge, I told him that we can keep objects like tables and indexes in the KEEP cache by declaring them so, or that if the same query runs continuously it will automatically remain cached via LRU. Pinning a query itself is new to me; I searched but wasn't finding an answer, so I finally decided to put the question on the forum.
I am trying hard to get a job as a DBA, but so far in vain, and it's becoming frustrating. I have great hopes in Oracle; I have enjoyed learning Oracle since the beginning of my graduation, which is why I decided to make my career in Oracle, specifically as a DBA.
Now let's see what happens...
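For what it's worth, what can actually be pinned is an object's blocks (via the KEEP buffer pool) or, in 11g, a query's result set (via the result cache), not the query itself. A sketch with hypothetical object names:

```sql
-- Size a dedicated KEEP pool (requires ALTER SYSTEM privilege):
ALTER SYSTEM SET db_keep_cache_size = 256M SCOPE=BOTH;

-- Assign the hot table and index to the KEEP pool so their blocks
-- are not aged out by ordinary LRU pressure:
ALTER TABLE hot_table STORAGE (BUFFER_POOL KEEP);
ALTER INDEX hot_table_pk STORAGE (BUFFER_POOL KEEP);

-- In 11g, the query's result set itself can be cached:
SELECT /*+ RESULT_CACHE */ region, COUNT(*)
FROM   hot_table
GROUP  BY region;
```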
Inconsistent cursor cache error still persisting
Hey, I have asked this question before but no solution has been provided. I am facing a very serious problem. I have Oracle 11g on Windows 2003 on a 64-bit machine with 2 processors and 8 GB RAM. The SGA/PGA automatic features are turned on, and 6 GB is assigned to memory_max_target/memory_target. My problem is that when I run 600 update statements in one go, with every update statement updating between 1 and 30000 records and a commit after every 20 records, then after 20 to 30 records Oracle goes down (shuts down). When I check the alert log it advises checking the trace file, and in the trace file I get this error: ORA-02103: PCC: inconsistent cursor cache (out-of-range cuc ref). I set cursor sharing to force, but no luck. Is there any other fast way to update records?
cursor_sharing string force
cursor_space_for_time boolean FALSE
open_cursors integer 300
session_cached_cursors integer 50
My update statements are the following:
Update jg_6july_dg0 Set Operator_code= '915724' where Operator_code= '015325';
Update jg_6july_dg0 Set Operator_code= '915715' where Operator_code= '015323';
Update jg_6july_dg0 Set Operator_code= '915712' where Operator_code= '015374';
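As a side note, hundreds of such statements can often be collapsed into a single UPDATE driven by a mapping table, which also greatly reduces parsing and cursor-cache pressure. A sketch; the mapping table is hypothetical:

```sql
-- Hypothetical mapping table: old operator code -> new operator code.
CREATE TABLE operator_code_map (
  old_code VARCHAR2(10) PRIMARY KEY,
  new_code VARCHAR2(10) NOT NULL
);

INSERT INTO operator_code_map VALUES ('015325', '915724');
INSERT INTO operator_code_map VALUES ('015323', '915715');
INSERT INTO operator_code_map VALUES ('015374', '915712');

-- One statement replaces hundreds of individual updates.
UPDATE jg_6july_dg0 t
SET    t.operator_code = (SELECT m.new_code
                          FROM   operator_code_map m
                          WHERE  m.old_code = t.operator_code)
WHERE  t.operator_code IN (SELECT old_code FROM operator_code_map);
COMMIT;
```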
Cursor caching, I think, is not the problem; either it is an Oracle bug or I'm doing something wrong. I can't believe that Oracle has no solution for such a little problem. My question is: why does Oracle shut down? Waiting for your reply.
Sir, no such error, I got nothing. Sir, the process of raising an SR is quite cumbersome; I'm stuck where the SR first page asks for Type of Problem, and I don't know how to set its value. One more strange thing: I have changed the SGA/PGA settings and now the error in my alert log has changed. I have pasted part of the alert log here, please check and tell me what I should do now.
Checkpoint not complete
Current log# 2 seq# 1844 mem# 0: E:\APP\ADMINISTRATOR\ORADATA\ORCL\ONLINELOG\O1_MF_2_4VH0YMCK_.LOG
Current log# 2 seq# 1844 mem# 1: E:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORCL\ONLINELOG\O1_MF_2_4VH0YMLC_.LOG
Fri Apr 03 19:15:50 2009
Errors in file e:\app\administrator\diag\rdbms\orcl\orcl\trace\orcl_lgwr_5088.trc (incident=164082):
ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by 'inst 1, osid 3604'
Incident details in: e:\app\administrator\diag\rdbms\orcl\orcl\incident\incdir_164082\orcl_lgwr_5088_i164082.trc
Killing enqueue blocker (pid=3604) on resource CF-00000000-00000000
by killing session 545.1
Killing enqueue blocker (pid=3604) on resource CF-00000000-00000000
by terminating the process
LGWR (ospid: 5088): terminating the instance due to error 2103
Fri Apr 03 19:15:51 2009
Errors in file e:\app\administrator\diag\rdbms\orcl\orcl\trace\orcl_j000_4256.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-02103: PCC: inconsistent cursor cache (out-of-range cuc ref)
Fri Apr 03 19:15:52 2009
Errors in file e:\app\administrator\diag\rdbms\orcl\orcl\trace\orcl_j001_4764.trc:
ORA-02103: PCC: inconsistent cursor cache (out-of-range cuc ref)
Instance terminated by LGWR, pid = 5088
Fri Apr 03 19:25:08 2009
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =61
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 11.1.0.7.0.
Using parameter settings in server-side spfile E:\APP\ADMINISTRATOR\PRODUCT\11.1.0\DB_1\DATABASE\SPFILEORCL.ORA
System parameters with non-default values:
processes = 500
sessions = 555
sga_max_size = 5G
nls_length_semantics = "BYTE"
resource_manager_plan = ""
sga_target = 5G
memory_target = 0
memory_max_target = 7360M
control_files = "E:\APP\ADMINISTRATOR\ORADATA\ORCL\CONTROLFILE\O1_MF_4VH0YL9L_.CTL"
control_files = "E:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORCL\CONTROLFILE\O1_MF_4VH0YLF0_.CTL"
db_block_size = 16384
compatible = "11.1.0.0.0"
db_files = 7000
db_create_file_dest = "E:\app\Administrator\oradata"
db_recovery_file_dest = "E:\app\Administrator\flash_recovery_area"
db_recovery_file_dest_size= 2G
undo_tablespace = "UNDOTBS1"
undo_retention = 900
sec_case_sensitive_logon = FALSE
remote_login_passwordfile= "EXCLUSIVE"
db_domain = ""
dispatchers = "(PROTOCOL=TCP) (SERVICE=orclXDB)"
audit_file_dest = "E:\APP\ADMINISTRATOR\ADMIN\ORCL\ADUMP"
audit_trail = "DB"
db_name = "orcl"
open_cursors = 300
pga_aggregate_target = 2112M
enable_ddl_logging = FALSE
aq_tm_processes = 0
diagnostic_dest = "E:\APP\ADMINISTRATOR"
Fri Apr 03 19:25:09 2009
PMON started with pid=2, OS id=2752
Fri Apr 03 19:25:09 2009
VKTM started with pid=3, OS id=1252 at elevated priority
VKTM running at (20)ms precision
Fri Apr 03 19:25:09 2009
DIAG started with pid=4, OS id=2596
Fri Apr 03 19:25:09 2009
DBRM started with pid=5, OS id=1436
Fri Apr 03 19:25:09 2009
PSP0 started with pid=6, OS id=5104
Fri Apr 03 19:25:09 2009 -
How to Query a Cursor and retrieve a selected records
Hi,
I'm using a ScrollableCursor for pagination and it is working fine. Now I want to filter that cursor to get selected records, so I need to query the cursor. Is it possible to query a cursor using Oracle TopLink?
Looking at the API, I found the method cursor.setSelectionCriteriaClone(expression). Is this method useful for applying the expression (query) to the cursor? If it is possible, please send me sample code.
Thanks
You cannot filter a Cursor; you would need to execute a new query (you could base your new query on the cursor's query's selectionCriteria, though).
James : http://www.eclipselink.org -
Aggregate query on global cache group table
Hi,
I set up two global cache nodes. As we know, global cache group is dynamic.
The cache group can be dynamically loaded by primary key or foreign key, as I understand it.
There are three records in the Oracle cache table; one record is loaded in node A, and the other two in node B.
Oracle:
1 Java
2 C
3 Python
Node A:
1 Java
Node B:
2 C
3 Python
If I run SELECT COUNT(*) in node A or node B, the results are 1 and 2 respectively.
The questions are:
How can I get the real count, 3?
Is it reasonable to run this query on a global cache group table?
One idea is to create another read-only node for aggregation queries, but that seems weird.
Thanks very much.
Regards,
Nesta
Edited by: user12240056 on Dec 2, 2009 12:54 AM
Do you mean something like
UPDATE sometable SET somecol = somevalue;
where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated, then you could simply execute multiple individual updates. If the number of rows is large, or you do not know all the keys in advance, then you could adopt the approach of ensuring that all relevant rows are already in the local cache grid node via LOAD CACHE GROUP ... WHERE .... Alternatively, if you do not need grid functionality, you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
I would not try and use JTA to update rows in multiple grid nodes in one transaction; it will be slow and you would have to know which rows are located in which nodes...
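The LOAD CACHE GROUP approach mentioned above could be sketched like this; the cache group name, table name, and predicate are hypothetical:

```sql
-- Pull all rows satisfying the predicate into this grid node first:
LOAD CACHE GROUP lang_cg WHERE (lang.id <= 3) COMMIT EVERY 256 ROWS;

-- A local aggregate in this node now sees all three rows:
SELECT COUNT(*) FROM lang;
```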
Chris -
Cursor cache - Time: what is this time?
Administration -> Manage Sessions -> Cursor Cache -> Time.
I have a question about this time.
I ran a report and viewed its log through Administration -> Manage Sessions -> Cursor Cache -> View Log. This report had no previous cache entries, because I had cleared them all.
The time shown for this report under Administration -> Manage Sessions -> Cursor Cache -> Time says 18 seconds.
I am sure that when I clicked on the tab holding this report, it took less than 4 seconds for the page to load with the report on it.
So I wasn't sure what this time actually is. When I look into the log (for this particular report), one of the lines at the end says:
[2012-03-09T15:50:04.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-33] [] [ecid: d01cd216d41a2bc8:bf26dbb:13549056e05:-8000-00000000005b8cad] [tid: 44ded940] [requestid: 7ee0096] [sessionid: 7ee0000] [username: -2327690837] -------------------- Logical Query Summary Stats: Elapsed time 23, Response time 18, Compilation time 1 (seconds) [[
But the report surely gave back results in less than 18 seconds, so what does this time indicate?
DICTIONARY LOOKUP CURSOR CACHED
Hi,
Can anyone please tell me what this 'DICTIONARY LOOKUP CURSOR CACHED' is? I found that my ADF application uses too many of these cursors when running, and it overloads my database very quickly.
Can anyone please tell me the purpose of these cursors and how they are created?
My environment is as follows,
Database : oracle 11g
Server : Weblogic 10
My database tells me you are posting to the wrong forum:
CHE_TEST@tcp_asterix_impl> r
1 select case when 'ADF' = 'FORMS'
2 then 'correct forum'
3 else 'incorrect forum' end
4* from dual
CASEWHEN'ADF'='
incorrect forum
CHE_TEST@tcp_asterix_impl>
You may also want to try Google first.
cheers