Datapump - Parallelism is not working
Hello,
I am running 11.1.0.7 on AIX.
I am taking an expdp of a table using the value 4 for the PARALLEL parameter.
expdp SYSTEM TABLES=MYTEST.HISTORY DIRECTORY=EXPORT_FILES DUMPFILE=TEST_HIST_%U.EXPDP.DMP LOGFILE=TEST_HIST.EXPDP.LOG PARALLEL=4
But I see only two dump files created, and it seems most of the data is going to only one:
ls -ltr
total 286757112
-rw-r----- 1 oracle staff 32768 Jan 17 15:38 TEST_HIST_02.EXPDP.DMP
-rw-r----- 1 oracle staff 19154370560 Jan 17 15:38 TEST_HIST_01.EXPDP.DMP
Why this behaviour? I thought the data would be distributed across 4 different dump files, since I set the job to run in parallel mode and I have 6 CPUs in the box.
Thanks in advance!
This has nothing to do with the parallelism set for the table. DO NOT CHANGE TABLE PARALLELISM for Data Pump. Sorry for the shout, but table-level parallelism does not change anything that Data Pump looks at. That suggestion is wrong.
The reason you may only get two dump files comes down to several things. First, let me explain how expdp works with parallelism. When expdp starts, the first work item assigned to a worker process is to export the metadata. The first part of this request is the 'estimation' phase. This phase gets the names of the tables/partitions/subpartitions that need to be exported. This information is sent to the MCP process so it can then schedule the data unload. The data unload will start right away if parallel is greater than 1. The worker process that did the estimation then starts unloading metadata, which is written to file #1. In your case, parallel=4, so the MCP will try to split up the data unload that needs to be done. The data can be broken up into 1 or n jobs. The decision on how many jobs to create is based on factors like these:
1. Generally exporting data using direct path is n times faster than external tables
2. Direct path does not support parallelism on a single table. This means that worker 2 could be assigned table1 via direct path and worker 3 could be assigned table2. Parallelism is achieved, but by unloading 2 tables at the same time.
3. Some table attributes are not supported by direct path, so if a table has those attributes, external tables must be chosen. External tables support parallelism on a single table, but some attributes prohibit single-table parallelism.
4. If the table is not larger than x MB, then the overhead of setting up an external table is not worth the parallelism, so direct path (parallel 1) is used.
And the list goes on. From what I can see, you had 1 worker exporting metadata and writing it to one file, and you had another worker exporting the data and writing it to your second file. The data for that table was exported using parallel 1. I'm not sure why, but because you only had 2 dump files, that is the only scenario I can come up with.
Can you do this and post the results:
Use your expdp command, add ESTIMATE=BLOCKS, and then post the output from the estimate lines. I might be able to tell from that information why the data was exported using parallel 1.
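For reference, that would look something like the following (directory and dump file names taken from the original command; the actual estimate output will of course differ per environment):

```shell
# Same job as before, plus ESTIMATE=BLOCKS so the log shows the
# per-table/per-partition size estimates the MCP uses to split the work.
expdp SYSTEM TABLES=MYTEST.HISTORY DIRECTORY=EXPORT_FILES \
  DUMPFILE=TEST_HIST_%U.EXPDP.DMP LOGFILE=TEST_HIST.EXPDP.LOG \
  PARALLEL=4 ESTIMATE=BLOCKS
```

The lines of interest in the resulting log are the ". estimated ..." lines under "Estimate in progress using BLOCKS method...".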
Dean
Similar Messages
-
Level0 Parallel export not working for BSO
Hi,
We have a BSO cube with 15 dimensions, 2 dense and the rest sparse. The total number of members is around 9,000.
We added 400 new members to one of the sparse dimensions, after which the parallel export stopped working. I started checking by deleting members one by one and exporting. When I deleted 8 members, the export worked fine. If I add one extra member back after that, the parallel export stops working again.
The strange thing is that if I add a member as a child at any generation, the export works fine. If I add a member as a sibling starting from generation 3, the export does not work.
Ex: A is the dimension and B is a child of A. If I add a new member C as a sibling of B, the parallel export works.
Now add D as a child of C and E as a sibling of D. D here is at the third generation, and adding a sibling at or below this point makes the parallel export stop working.
I'm using a simple command: export database 'APP'.'DB' level0 data in columns to data_file "'D:\exp.1.txt'","'D:\exp.2.txt'";
If I use single file, export is happening fine.
Does anybody have an idea whether BSO has a member limit at particular generations? Please let me know your thoughts on this.
Thanks,
Swetha
Edited by: Swetha on Dec 16, 2010 3:16 AM
Dave,
I'm facing the same issue you faced earlier. The export files are created with only a header. The issue is that I'm not getting data in any of those files; all the files have just the header. I changed the order of the sparse dimensions so that my last sparse dimension is the largest, with 2,250 members. Still, the export creates 1 KB files with only headers.
The reason for changing a sparse dimension to dense is that we have a Scenario dimension with Actual, Operating and Replan members, and data is loaded for all years and all periods for all scenarios. So we thought it could be made dense, and we didn't observe much difference in behaviour; the export also works fine that way. One has to analyze the outline structure and data before going for this option.
I would still prefer old outline and appreciate your help on this..
Swetha
Edited by: Swetha on Dec 23, 2010 12:51 AM -
Hi,
I wanted to run a select statement in parallel; the table has two indexes. I have changed the degree of both the table and the indexes to 16 and I have also specified a parallel hint, but even then it's not running in parallel. What's the issue?
this is the query:
select /*+ parallel (a,16) */ * from xxxxx a where date_sys between '20101001' and '20101031' and table_name='telm' and substr(table_key,1,19)='0030000000002048809';
select * from v$px_process;
no rows
Sekar_BLUE4EVER wrote:
>
select /*+ parallel (a,16) */ * from archsbi.AUDT_16022011 a where date_sys between
'20101001' and '20101031' and table_name='telm' and
substr(table_key,1,19)='0030000000002048809'
Plan hash value: 581631348
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | | | | 873 (100)| |
|* 1 | TABLE ACCESS BY INDEX ROWID| xxxxxxx | 770 | 273K| 873 (56)| 00:00:11 |
|* 2 | INDEX SKIP SCAN | IND1 | 250 | | 832 (58)| 00:00:10 |
Predicate Information (identified by operation id):
1 - filter(SUBSTR("TABLE_KEY",1,19)='0030000000002048809')
2 - access("DATE_SYS">='20101001' AND "TABLE_NAME"='telm' AND
"DATE_SYS"<='20101031')
PLAN_TABLE_OUTPUT
filter("TABLE_NAME"='telm')
Parallel is overkill for fewer than 1,000 rows.
The optimizer is smarter than you! -
Datapump Import is not working
Database version: 10.2.0.1.0
Platform: Windows 2003 EE SP3
I am trying a data refresh using Data Pump. The export was successful, but the import into a different schema failed.
C:\>expdp hr/hr schemas=hr directory=test_dir dumpfile=hr020608.dmp logfile=hr020608.log
Export: Release 10.2.0.1.0 - Production on Monday, 02 June, 2008 11:29:54
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "HR"."SYS_EXPORT_SCHEMA_01": hr/******** schemas=hr directory=test_dir dumpfile=hr020608.dmp logfile=hr020608.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."COUNTRIES" 6.093 KB 25 rows
. . exported "HR"."DEPARTMENTS" 6.640 KB 27 rows
. . exported "HR"."EMPLOYEES" 15.77 KB 107 rows
. . exported "HR"."JOBS" 6.609 KB 19 rows
. . exported "HR"."JOB_HISTORY" 6.585 KB 10 rows
. . exported "HR"."LOCATIONS" 7.710 KB 23 rows
. . exported "HR"."REGIONS" 5.296 KB 4 rows
Master table "HR"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for HR.SYS_EXPORT_SCHEMA_01 is:
C:\TEMP\HR020608.DMP
==========================================
Import
===========================================
C:\>impdp scott/tiger schemas=hr directory=test_dir dumpfile=hr020608.dmp logfile=hrimp020608.log
Import: Release 10.2.0.1.0 - Production on Monday, 02 June, 2008 12:58:02
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SCOTT"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_SCHEMA_01": scott/******** schemas=hr directory=test_dir dumpfile=hr020608.dmp logfile=hrimp020608.log
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.LOAD_METADATA [TABLE_DATA:"SCOTT"."SYS_IMPORT_SCHEMA_01"]
SELECT process_order, flags, xml_clob, NVL(dump_fileid, :1), NVL(dump_position, :2), dump_length, dump_allocation, grantor, object_row, object_schema, object_long_name, processing_status, processing_s
tate, base_object_type, base_object_schema, base_object_name, property, size_estimate, in_progress FROM "SCOTT"."SYS_IMPORT_SCHEMA_01" WHERE process_order between :3 AND :4 AND processing_state <> :5
AND duplicate = 0 ORDER BY process_order
ORA-03212: Temporary Segment cannot be created in locally-managed tablespace
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 6279
----- PL/SQL Call Stack -----
object line object
handle number name
1DB41780 14916 package body SYS.KUPW$WORKER
1DB41780 6300 package body SYS.KUPW$WORKER
1DB41780 3514 package body SYS.KUPW$WORKER
1DB41780 6889 package body SYS.KUPW$WORKER
1DB41780 1262 package body SYS.KUPW$WORKER
1A8E3D94 2 anonymous block
Job "SCOTT"."SYS_IMPORT_SCHEMA_01" stopped due to fatal error at 12:58:08
Please help me with this: is it possible to import into the same schema, or into a different one?
Thank you
Johnson P Gerorge
ORA-03212: Temporary Segment cannot be created in locally-managed tablespace
Cause: An attempt was made to create a temporary segment for sort/hash/LOB operations in a locally-managed permanent tablespace.
Action: Alter the user's temporary tablespace to a true temporary tablespace or to a dictionary-managed permanent tablespace.
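Following the Action text above, the usual check and fix look something like this (user and tablespace names are just placeholders; adjust to your environment):

```sql
-- Which temporary tablespace is each user pointing at?
SELECT username, temporary_tablespace
  FROM dba_users
 WHERE username IN ('SCOTT', 'HR');

-- Point the importing user at a real temporary tablespace
ALTER USER scott TEMPORARY TABLESPACE temp;

-- And check the database-wide default for new users
SELECT property_value
  FROM database_properties
 WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';
```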
Does this database have a default temporary tablespace for all users? -
Hi all
I have a query with the following EXECUTION plan
description
SELECT STATEMENT, GOAL = CHOOSE 116432 23506 2468130
PX COORDINATOR FORCED SERIAL
PX SEND QC (RANDOM) SYS :TQ10008 116432 23506 2468130
FILTER
HASH GROUP BY 116432 23506 2468130
PX RECEIVE 116430 470114 49361970
PX SEND HASH SYS :TQ10007 116430 470114 49361970
HASH JOIN BUFFERED 116430 470114 49361970
BUFFER SORT
PX RECEIVE 87 5065 35455
PX SEND BROADCAST SYS :TQ10003 87 5065 35455
TABLE ACCESS FULL AOSWN PRODUCTS 87 5065 35455
HASH JOIN 116342 470114 46071172
BUFFER SORT
PX RECEIVE 3961 621349 8077537
PX SYS :TQ10004 3961 621349 8077537
TABLE ACCESS FULL AOSWN CONTRACTS 3961 621349 8077537
PX RECEIVE 112380 468906 39857010
PX SEND HASH SYS :TQ10006 112380 468906 39857010
HASH JOIN 112380 468906 39857010
BUFFER SORT
PX RECEIVE 2 38 228
PX SYS :TQ10000 2 38 228
TABLE ACCESS FULL AOSWN TAX_SCHEME_RATES 2 38 228
HASH JOIN 1 12378 468906 37043574
BUFFER SORT
PX RECEIVE 87 5065 55715
PX SYS :TQ10001 87 5065 55715
TABLE ACCESS FULL AOSWN PRODUCTS 87 5065 55715
NESTED LOOPS 112290 513303 34904604
HASH JOIN 76624 513332 29773256
PX RECEIVE 18807 3345998 93687944
PX SEND HASH SYS :TQ10005 18807 3345998 93687944
PX BLOCK ITERATOR 18807 3345998 93687944
TABLE ACCESS FULL AOSWN INSTALMENTS 18807 3345998 93687944
BUFFER SORT
PX RECEIVE 56852 10266630 307998900
PX SYS :TQ10002 56852 10266630 307998900
TABLE ACCESS FULL AOSWN TAX_DUE 56852 10266630 307998900
TABLE ACCESS BY INDEX ROWID AOSWN MEMBER_PRODUCTS 2 1 10
INDEX UNIQUE SCAN AOSWN MEPR_PK 1 1
But when tracking the actual execution there are no parallel processes spawned. Any idea what to check?
BR,
Florin
Check this link.
http://blogs.oracle.com/datawarehousing/entry/parallel_execution_precedence_of_hints
Regards
Raj -
XMLTable join causes parallel query not to work
We have a large table, a column stores xml data as binary xmltype storage, and XMLTABLE query is used to extract the data.
If we just need to extract data into a single column, and the data has no relation to other columns, the XMLTABLE query is super fast.
Once the data has a parent -> child relationship with other columns, the query becomes extremely slow. From the query plan, we can see that the parallel execution is gone.
I can reproduce the problem with the following scripts:
1. Test scripts to setup
=============================
-- Test table
drop table test_xml;
CREATE table test_xml
( period date,
xml_content xmltype)
XMLTYPE COLUMN xml_content STORE AS SECUREFILE BINARY XML (
STORAGE ( INITIAL 64K )
enable storage in row
nocache
nologging
chunk 8K )
parallel
compress;
-- Populate test_xml table with some records for testing
insert into test_xml (period, xml_content)
select sysdate, xmltype('<?xml version = "1.0" encoding = "UTF-8"?>
<searchresult>
<hotels>
<hotel>
<hotel.id>10</hotel.id>
<roomtypes>
<roomtype>
<roomtype.ID>20</roomtype.ID>
<rooms>
<room>
<id>30</id>
<meals>
<meal>
<id>Breakfast</id>
<price>300</price>
</meal>
<meal>
<id>Dinner</id>
<price>600</price>
</meal>
</meals>
</room>
</rooms>
</roomtype>
</roomtypes>
</hotel>
</hotels>
</searchresult>') from dual;
commit;
begin
for i in 1 .. 10
loop
insert into test_xml select * from test_xml;
end loop;
commit;
end;
select count(*) from test_xml;
-- 1024
2. The fast query. Extracting only room_id info; the plan shows parallel execution. The performance is very good.
=================================================================
explain plan for
select *
from test_xml,
XMLTABLE ('/searchresult/hotels/hotel/roomtypes/roomtype/rooms/room'
passing xml_content
COLUMNS
room_id varchar2(4000) PATH './id/text()'
) a;
select * from table(dbms_xplan.display());
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8364K| 15G| 548 (1)| 00:00:07 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 8364K| 15G| 548 (1)| 00:00:07 | Q1,00 | P->S | QC (RAND) |
| 3 | NESTED LOOPS | | 8364K| 15G| 548 (1)| 00:00:07 | Q1,00 | PCWP | |
| 4 | PX BLOCK ITERATOR | | | | | | Q1,00 | PCWC | |
| 5 | TABLE ACCESS FULL| TEST_XML | 1024 | 2011K| 2 (0)| 00:00:01 | Q1,00 | PCWP | |
| 6 | XPATH EVALUATION | | | | | | Q1,00 | PCWP | |
3. The slow query. Extracting room_id plus meal ids: no parallel execution. Performance is very bad.
==============================================================
-- One room can have multiple meal ids
explain plan for
select *
from test_xml,
XMLTABLE ('/searchresult/hotels/hotel/roomtypes/roomtype/rooms/room'
passing xml_content
COLUMNS
room_id varchar2(4000) PATH './id/text()'
, meals_node xmltype path './meals'
) a,
XMLTABLE ('./meals/meal'
passing meals_node
COLUMNS
meals_ids varchar2(4000) PATH './id/text()'
) b;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 68G| 125T| 33M (1)|112:33:52 |
| 1 | NESTED LOOPS | | 68G| 125T| 33M (1)|112:33:52 |
| 2 | NESTED LOOPS | | 8364K| 15G| 676 (1)| 00:00:09 |
| 3 | TABLE ACCESS FULL| TEST_XML | 1024 | 2011K| 2 (0)| 00:00:01 |
| 4 | XPATH EVALUATION | | | | | |
| 5 | XPATH EVALUATION | | | | | |
Is the XML binary storage designed to handle only non-parent-child relationships in the data?
I would highly appreciate it if someone could help.
This problem has been confirmed as an Oracle bug; the bug is not fixed yet.
Bug 16752984 : PARALLEL EXECUTION NOT WORKING WITH XMLTYPE COLUMN -
DBMS_DATAPUMP parallel parameter did not work
Hi, I am using DBMS_DATAPUMP with a network link to load a large table directly from one database into another. I used the parallel parameter to improve performance, but it looks like the parallelism did not take effect: still only one worker handles the import.
The status is
Job: LOADING
Owner: TRI
Operation: IMPORT
Creator Privs: TRUE
GUID: 25DB4B2BE420406B82A7AE159CF1E626
Start Time: Wednesday, 22 April, 2009 15:37:04
Mode: TABLE
Instance: orcl
Max Parallelism: 4
EXPORT Job Parameters:
IMPORT Job Parameters:
Parameter Name Parameter Value:
INCLUDE_METADATA 0
TABLE_EXISTS_ACTION TRUNCATE
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 4
Job Error Count: 0
Worker 1 Status:
Process Name: DW01
State: EXECUTING
Object Schema: TRI
Object Name: ORDER_LINES
Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 1
Worker Parallelism: 1
Worker 2 Status:
Process Name: DW02
State: WORK WAITING
My source database is 10.2.0.4 compatible=10.2.0
target database is 11.1.0.6
My table is around 3G
The API I am using is
my_handle := dbms_datapump.open(operation => 'IMPORT',job_mode => 'TABLE',
remote_link => my_db_link, job_name => my_job_name ,version=>'LATEST' ) ;
dbms_datapump.set_parameter (my_handle, 'TABLE_EXISTS_ACTION','TRUNCATE');
dbms_datapump.set_parameter (my_handle, 'INCLUDE_METADATA',0);
dbms_datapump.metadata_filter (handle => my_handle, name => 'SCHEMA_EXPR',
value=>'IN (''ORDER'')');
dbms_datapump.metadata_filter (handle => my_handle, name => 'NAME_LIST',
value => '''ORDER_LINES''');
dbms_datapump.metadata_remap(my_handle,'REMAP_SCHEMA',old_value=>'ORDER',value=>'TRI');
dbms_datapump.set_parallel(my_handle,16);
dbms_datapump.start_job(my_handle);
Edited by: tonym on Apr 23, 2009 10:49 AM
Then I tested using the API and a network link to export a large table from the remote database with the parallel parameter.
my_handle := dbms_datapump.open(operation => 'EXPORT',job_mode => 'TABLE',remote_link => my_db_link, job_name => my_job_name ,version=>'LATEST' ) ;
DBMS_DATAPUMP.add_file( handle => my_handle, filename => 'test1.dmp', directory => 'DATA_PUMP_DIR');
DBMS_DATAPUMP.add_file( handle => my_handle, filename => 'test2.dmp', directory => 'DATA_PUMP_DIR');
DBMS_DATAPUMP.add_file( handle => my_handle, filename => 'test3.dmp', directory => 'DATA_PUMP_DIR');
DBMS_DATAPUMP.add_file( handle => my_handle, filename => 'test4.dmp', directory => 'DATA_PUMP_DIR');
dbms_datapump.metadata_filter (handle => my_handle, name => 'SCHEMA_EXPR',value=>'IN (''INVENTORY'')');
dbms_datapump.metadata_filter (handle => my_handle, name => 'NAME_LIST',value => '''INV_TRANSACTIONS''');
dbms_datapump.set_parallel(my_handle,4);
dbms_datapump.start_job(my_handle);
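For reference, instead of four separate add_file calls, Data Pump can also generate the whole file set from a single %U template. Whether that changes the single-worker behaviour seen here is not guaranteed, but it is the usual way to pair a growing file set with SET_PARALLEL (directory name as in the code above):

```sql
-- One add_file call with a %U substitution variable; Data Pump creates
-- test01.dmp, test02.dmp, ... as the workers need them.
DBMS_DATAPUMP.add_file( handle => my_handle, filename => 'test%U.dmp', directory => 'DATA_PUMP_DIR');
dbms_datapump.set_parallel(my_handle, 4);
```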
It looks like it did not use parallel either. This table is around 3 GB too.
status:
Job: LOADING_2
Operation: EXPORT
Mode: TABLE
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 4
Job Error Count: 0
Dump File: C:\ORACLE\ADMIN\OW\DPDUMP\TEST1.DMP
bytes written: 69,632
Dump File: C:\ORACLE\ADMIN\OW\DPDUMP\TEST2.DMP
bytes written: 4,096
Dump File: C:\ORACLE\ADMIN\OW\DPDUMP\TEST3.DMP
bytes written: 4,096
Dump File: C:\ORACLE\ADMIN\OW\DPDUMP\TEST4.DMP
bytes written: 4,096
Worker 1 Status:
Process Name: DW04
State: WORK WAITING
Worker 2 Status:
Process Name: DW05
State: EXECUTING
Object Schema: INVENTORY
Object Name: INV_TRANSACTIONS
Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 1
Completed Rows: 4,652,330
Worker Parallelism: 1
Edited by: tonym on Apr 23, 2009 10:53 AM -
Parallel flows not initiating parallely
Hi All,
I have a flowN activity in my BPEL process (PROC1), which creates parallel flows.
Inside each parallel flows I'm calling a another process (PROC2) through partner link.
In my PROC2 I have HumanTask activity, which sends email to the Approver's and waits till that person responds.
The problem is
For EG:
If 2 parallel flows created dynamically, two mails has to send to 2 different approvers(one from each parallel flow) @ same time, but in my case first one flow started its process and the second one starts after the first gets completed.
Could anyone help me how to rectify this.
Thanks
Viki
Hi Marc,
Thanks for your response.
I found the problem.
In PROC2, I created a partner link for PROC3 (globally).
From each branch of the flowN inside PROC2, I'm invoking PROC3. It seems each branch's call goes to the global PROC3 partner link, so the parallel flow does not work; it runs sequentially.
Maybe putting the partner link inside the scope, or creating an instance of the partner link inside each branch, would be the best way to solve my problem.
I created a partner link inside my flowN scope, but it gives an error about not finding the WSDL location.
Is it possible to create a partner link inside a scope activity?
Thanks
Viki -
Ctrl/Alt Keys not Working in Parallels
Hi,
I'm running Illustrator CS6 using Parallels Desktop 8 and Windows 7. It appears that the Ctrl/Alt keys do not work. Has anyone found a solution? I've tried resetting my preferences and restarting my computer. Any help would be greatly appreciated.
Although this is an old thread, I also seem to be having the same sort of issues with Illustrator CS2 in Snow Leopard running under Parallels Desktop 9.
Re. the proposed solution, there does not seem to be a 'Virtual Machine -> Configure -> Options -> Advanced -> Optimize modifier keys for games' option in Desktop 9 - or I've not found it.
I will pursue this in the Parallels forums, but if anyone here has found a solution, I'd be interested to hear it. -
Microphone on Macbook pro does not work in windows 7 with parallels
I recently purchased a MacBook Pro with Retina display and installed Windows 7 on it with Parallels Desktop 7. I found out my microphone was not working correctly inside Windows. I brought my laptop to the Apple Store; at first we thought it was a microphone problem and they exchanged it for a new laptop, but the issue still exists. So I brought the laptop back to the Apple Store, and this time we found that the microphone only misbehaves under Windows; it works just fine on the Mac side.
Apple directed me to Windows, telling me that I might need to install a device or driver that only Windows can provide; when I contacted Windows, they told me I need to contact Apple, because the device or driver must be provided by the manufacturer of the PC. Each side keeps directing me back to the other, so I still don't know what driver or device I need.
When I pull up the device manager inside windows, the only device listed under sound is "Parallels Audio Controller (x64)"
Another thing: the microphone still works inside Windows 7, it's just not working correctly. For example, if I'm doing a live chat with my friends inside Windows, I can hear my friends and they can hear me too, but they cannot hear me very clearly; there seems to be some kind of feedback, like an electronic vibration in my voice.
Can someone please help me
Thank you
Post on the Parallels forums: http://forums.parallels.com
-
My Parallels program is not working since I upgraded my MacBook Pro to OS X Mountain Lion. I was wondering if there is a solution to getting this working again?
You just need to update it to the latest version. See:
http://kb.parallels.com/114449 -
Degree in parallel hint is not working properly.
Hi Experts,
I am using the following delete statement.
delete /*+ parallel(t,30) */ from master_header t;
Sometimes a degree of 30 is not used.
On what basis should we choose the degree?
Is the degree session-specific?
Are there any guidelines or a checklist to follow when using this hint?
Please help me.
Thanks.
Does your system have the capacity to run multiple parallel queries with 30 processes concurrently? (Like a Sun 6900 with 48 dual-core processors or something; a little 8-CPU box will not be able to handle this.) That is why it is called a "hint": while you can hint a query, the CBO can (and in this case will) ignore it and reduce the parallel degree to what it sees the system can handle. It tries to keep developers from doing stupid stuff.
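One way to see what actually happened, as a sketch (the view names are standard, though the exact statistic names vary a little by version), is to check the PX views right after the statement runs:

```sql
-- Any parallel slave processes currently alive on the instance?
SELECT * FROM v$px_process;

-- In the session that ran the statement: per-statistic counts for the
-- last query vs. the session total, including downgrade counters such
-- as "DML/DDL Parallelized" and "Queries Parallelized".
SELECT statistic, last_query, session_total
  FROM v$pq_sesstat;
```

If the downgrade counters move while the "Parallelized" counters stay at zero, the CBO reduced or refused the requested degree.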
-
Browsers in OS X slow or not working while browsers running in Parallels are fine
From day one, I had massive problems using first Safari and then Chrome in OS X. I would have to refresh ten times or more to get to a webpage. I was advised to create a new user and see if the problem persisted and it didn't. Browsing was perfectly fine. But that does not solve the problem. I need to use the browsers in my user profile, not in one that has not got any of my applications in it. I did rebuild my user profile from scratch, which took a lot of time and I just ended up having the same problems.
Then I purchased Parallels and installed Chrome in there. And bizarrely, it's ok there. Still not as quick as I would expect, but useable.
Another peculiar thing is the choice or router: I bought a very fast Belkin router because initially, I just had a TalkTalk modem/router that came with the broadband. It is very slow and since it is downstairs, it never managed to transmit fully to my office. So, I connected the Belkin to the TalkTalk via Ethernet and was hoping that I would now get better and faster wifi. And that does work for my iPhone, iPad and PC but not for the iMac under OS X. If I go back to my TalkTalk wifi, it does work with the OS X browsers, but it is slow due to weak signal.
I know this is getting confusing, so here is an overview:
                 browsers in OS X        browsers in Windows 7 through Parallels
TalkTalk wifi    works, but slow         works, but slow
Belkin wifi      hardly works at all     works, and mostly very fast
Following from that, here are a number of statements that should be true:
1. It can't be hardware, since wifi works with the TalkTalk router and even with the Belkin through Windows 7
2. It can't be OS 10.7 in general, since it is working with the TalkTalk even though it does not with the Belkin
3. It can't be the Belkin in general, since the browsers are working through Windows 7
4. It can't be the positioning of either the routers or the iMac, since it works for both routers in certain configurations.
I can only assume that it is some setting in OS 10.7 that is in conflict with the Belkin router.
Now, I can obviously use the internet through Parallels or use the TalkTalk but that is not the point. I paid good money for both the iMac and the Belkin and I think I should be allowed to expect both to work with each other! Or shouldn't I?!?
Anyway, any suggestion would be gratefully received!
Please read this whole message before doing anything.
This procedure is a diagnostic test. It’s unlikely to solve your problem. Don’t be disappointed when you find that nothing has changed after you complete it.
The purpose of this exercise is to determine whether the problem is caused by third-party system modifications that load automatically at startup or login. Disconnect all wired peripherals except those needed for the test, and remove all aftermarket expansion cards. Boot in safe mode and log in to the account with the problem. The instructions provided by Apple are as follows:
Be sure your Mac is shut down.
Press the power button.
Immediately after you hear the startup tone, hold the Shift key. The Shift key should be held as soon as possible after the startup tone, but not before the tone.
Release the Shift key when you see the gray Apple icon and the progress indicator (looks like a spinning gear).
Safe mode is much slower to boot and run than normal, and some things won’t work at all, including wireless networking on certain Macs.
The login screen appears even if you usually log in automatically. You must know your login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin.
Test while in safe mode. Same problem(s)?
After testing, reboot as usual (i.e., not in safe mode) and verify that you still have the problem. Post the results of the test. -
SUM on XMLTable Column Causes Parallel Query to Not Work
I have the following query that creates an XMLTable out of an xml document and is then used to aggregate data from the document.
My problem is that when I try to SUM the XMLTable return column, I am unable to get parallel processing; yet when I remove the SUM from the return column, parallel processing works as expected. Can anyone shed some light on what the problem may be?
One note: I am applying a larger query than the following to hundreds of millions of records so parallel processing is definitely desired/needed.
The Query:
SELECT /*+ full(n) parallel(n,8) */
x451s30 as "XYZ"
--SUM(x451s30) as "XYZ"
FROM NADS n,
metas met,
XMLTable(XMLNAMESPACES('http://fbi.gov/cjis/N-DEx' as "p1"),
'/*' PASSING n.DATA_INTERNAL
COLUMNS
x451s30 NUMBER PATH 'count(/p1:DataItem/Person[Role = "Arrest Subject"]/Name/LastName)') T7
WHERE STATUS_ID=1
AND NAT_SOURCE_ID=0
AND OWNING_ORI like 'ABC123'
and n.id=met.nads_id and met.type_id=1 and met.val='Arrest Report';
Explain Plan without SUM
PLAN_TABLE_OUTPUT
Plan hash value: 2296199318
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 130 | 3713 (0)| 00:00:45 | | | | | |
| 1 | SORT AGGREGATE | | 1 | 2 | | | | | | | |
| 2 | XPATH EVALUATION | | | | | | | | | | |
| 3 | PX COORDINATOR | | | | | | | | | | |
| 4 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 130 | 3713 (0)| 00:00:45 | | | Q1,00 | P->S | QC (RAND) |
| 5 | NESTED LOOPS | | 1 | 130 | 3713 (0)| 00:00:45 | | | Q1,00 | PCWP | |
| 6 | NESTED LOOPS | | 1 | 128 | 3683 (0)| 00:00:45 | | | Q1,00 | PCWP | |
| 7 | PX BLOCK ITERATOR | | 1 | 108 | 3683 (0)| 00:00:45 | 1 | 63 | Q1,00 | PCWC | |
|* 8 | TABLE ACCESS FULL | NADS | 1 | 108 | 3683 (0)| 00:00:45 | 1 | 63 | Q1,00 | PCWP | |
|* 9 | TABLE ACCESS BY INDEX ROWID| METAS | 1 | 20 | 2 (0)| 00:00:01 | | | Q1,00 | PCWP | |
|* 10 | INDEX UNIQUE SCAN | MET_NADS_ID_TYPE_ID_UK | 1 | | 1 (0)| 00:00:01 | | | Q1,00 | PCWP | |
| 11 | XPATH EVALUATION | | | | | | | | Q1,00 | PCWP | |
Predicate Information (identified by operation id):
8 - filter("OWNING_ORI"='ABC123' AND "NAT_SOURCE_ID"=0 AND "STATUS_ID"=1)
9 - filter("MET"."VAL"='Arrest Report')
10 - access("N"."ID"="MET"."NADS_ID" AND "MET"."TYPE_ID"=1)
Note
- dynamic sampling used for this statement (level=4)
29 rows selected.
Explain Plan with SUM
PLAN_TABLE_OUTPUT
Plan hash value: 1527372262
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 130 | 3713 (0)| 00:00:45 | | |
| 1 | SORT AGGREGATE | | 1 | 2 | | | | |
| 2 | XPATH EVALUATION | | | | | | | |
| 3 | SORT AGGREGATE | | 1 | 130 | | | | |
| 4 | SORT AGGREGATE | | 1 | 130 | | | | |
| 5 | NESTED LOOPS | | 1 | 130 | 3713 (0)| 00:00:45 | | |
| 6 | NESTED LOOPS | | 1 | 128 | 3683 (0)| 00:00:45 | | |
| 7 | PARTITION LIST ALL | | 1 | 108 | 3683 (0)| 00:00:45 | 1 | 63 |
|* 8 | TABLE ACCESS FULL | NADS | 1 | 108 | 3683 (0)| 00:00:45 | 1 | 63 |
|* 9 | TABLE ACCESS BY INDEX ROWID| METAS | 1 | 20 | 2 (0)| 00:00:01 | | |
|* 10 | INDEX UNIQUE SCAN | MET_NADS_ID_TYPE_ID_UK | 1 | | 1 (0)| 00:00:01 | | |
| 11 | XPATH EVALUATION | | | | | | | |
Predicate Information (identified by operation id):
8 - filter("OWNING_ORI"='ABC123' AND "NAT_SOURCE_ID"=0 AND "STATUS_ID"=1)
9 - filter("MET"."VAL"='Arrest Report')
10 - access("N"."ID"="MET"."NADS_ID" AND "MET"."TYPE_ID"=1)
Note
- dynamic sampling used for this statement (level=4)
29 rows selected.
Edited by: drad on May 9, 2012 3:31 AM
After adding the no_xml_query_rewrite hint I get the PX COORDINATOR back and parallel processing is enabled; however, this comes at a significant performance impact (the query takes 2-3 times longer). It currently appears that parallel is not worth the cost of losing the XML query rewrite unless a different indexing scheme can make up the difference.
Apparently the issue was the optimizer rewriting the xml query in the XMLTable which was causing Parallel processing to not be used.
For those interested, the full query is now as follows:
SELECT /*+ full(n) parallel(n,8) no_xml_query_rewrite */
COUNT(*) as TOTAL_RECORDS,
SUM(T7.x422s1) AS ArrestsDateEX,
SUM(T7.x423s2) AS ArrNarAccDesEX,
SUM(T7.x424s3) AS WitNarAccDesEX,
SUM(T7.x425s4) AS WitFirstNameEX,
SUM(T7.x426s5) AS WitFullNameEX,
SUM(T7.x427s6) AS WitLastNameEX,
SUM(T7.x428s7) AS WitMiddleNameEX,
SUM(T7.x429s8) AS WitSexCodeEX,
SUM(T7.x430s9) AS OfficerORIEX,
SUM(T7.x431s10) AS OfficerFirstNameEX,
SUM(T7.x432s11) AS OfficerFullNameEX,
SUM(T7.x433s12) AS OfficerLastNameEX,
SUM(T7.x434s13) AS OfficerMiddleNameEX,
SUM(T7.x435s14) AS AreJuvDisCodeEX,
SUM(T7.x436s15) AS AreUCRArrOffCodeEX,
SUM(T7.x437s16) AS ArrAdtIndicatorEX,
SUM(T7.x438s17) AS AreBirthDateEX,
SUM(T7.x439s18) AS AreEthCodeEX,
SUM(T7.x440s19) AS AreEyeColorEX,
SUM(T7.x441s20) AS AreHairColorEX,
SUM(T7.x442s21) AS ArrHeightEX,
SUM(T7.x443s22) AS AreDvrLicNumEX,
SUM(T7.x444s23) AS AreFBINumEX,
SUM(T7.x445s24) AS ArePassportIDEX,
SUM(T7.x446s25) AS AreSSNEX,
SUM(T7.x447s26) AS AreStateIDEX,
SUM(T7.x448s27) AS AreUSMSIDEX,
SUM(T7.x449s28) AS AreFirstNameEX,
SUM(T7.x450s29) AS AreFullNameEX,
SUM(T7.x451s30) AS AreLastNameEX,
SUM(T7.x452s31) AS AreMiddleNameEX,
SUM(T7.x453s32) AS AreSexCodeEX,
SUM(T7.x454s33) AS AreWeightEX,
SUM(T7.x455s34) AS ArrLocCityEX,
SUM(T7.x456s35) AS ArrLocCountryEX,
SUM(T7.x457s36) AS ArrLocFullAddressEX,
SUM(T7.x458s37) AS ArrLocFullStreetEX,
SUM(T7.x459s38) AS ArrLocStateEX,
SUM(T7.x460s39) AS ArrLocStreetNameEX,
SUM(T7.x461s40) AS ArrLocStreetNumberEX,
SUM(T7.x462s41) AS ArrLocStreetPostdirectionEX,
SUM(T7.x463s42) AS ArrLocSteetPredirectionEX,
SUM(T7.x464s43) AS ArrLocZipEX,
SUM(T7.x465s44) AS OffCmpIndicatorEX,
SUM(T7.x466s45) AS OffDesTextEX,
SUM(T7.x467s46) AS OffDomVioIndicatorEX,
SUM(T7.x468s47) AS OffFrcUseCodeEX,
SUM(T7.x469s48) AS OffGangInvCodeEX,
SUM(T7.x470s49) AS OffHmeInvIndicatorEX,
SUM(T7.x471s50) AS OffIdtTftIndicatorEX,
SUM(T7.x472s51) AS OffMOCCodeEX,
SUM(T7.x473s52) AS OffOffCodeEX,
SUM(T7.x474s53) AS OffTrrIndicatorEX,
SUM(T7.x475s54) AS WrrAgencyEX,
SUM(T7.x476s55) AS WrrAgencyORIEX,
SUM(T7.x477s56) AS WrrCplOrgEX,
SUM(T7.x478s57) AS WrrCplOrgORIEX,
SUM(T7.x479s58) AS WrrCplPersonEX,
SUM(T7.x480s59) AS WrrDesEX,
SUM(T7.x481s60) AS WrrIssAuthorityEX,
SUM(T7.x482s61) AS WrrIssAuthorityORIEX,
SUM(T7.x483s62) AS WrrWrrDateEX
FROM NADS n, metas met, meta_types mtt,
XMLTable(XMLNAMESPACES('http://fbi.gov/cjis/N-DEx' as "p1"),
'/p1:DataItem' PASSING n.DATA_INTERNAL
COLUMNS
x451s30 NUMBER PATH 'count(Person[Role="Arrest Subject"]/Name/LastName)',
x452s31 NUMBER PATH 'count(Person[Role="Arrest Subject"]/Name/MiddleName)',
x453s32 NUMBER PATH 'count(Person[Role="Arrest Subject"]/Sex)',
x454s33 NUMBER PATH 'count(Person[Role="Arrest Subject"]/Weight[PointValue or MaximumValue or MinimumValue])',
x455s34 NUMBER PATH 'count(Location[AssociationReference/AssociationType=Arrest/AssociationReference/AssociationType and AssociationReference/AssociationGUID=Arrest/AssociationReference/AssociationGUID]/City)',
x456s35 NUMBER PATH 'count(Location[Country])',
x457s36 NUMBER PATH 'count(Location/FullAddress)',
x458s37 NUMBER PATH 'count(Location/FullStreetAddress)',
x459s38 NUMBER PATH 'count(Location/State)',
x460s39 NUMBER PATH 'count(Location/StreetName)',
x461s40 NUMBER PATH 'count(Location/StreetNumber)',
x462s41 NUMBER PATH 'count(Location/StreetPostdirection)',
x463s42 NUMBER PATH 'count(Location/StreetPredirection)',
x464s43 NUMBER PATH 'count(Location/PostalCode)',
x465s44 NUMBER PATH 'count(Offense/OtherContent[Info="offense was completed" or Info="offense was attempted"])',
x466s45 NUMBER PATH 'count(Offense/OffenseDescriptionText)',
x467s46 NUMBER PATH 'count(Offense/DomesticViolenceIndicator)',
x468s47 NUMBER PATH 'count(Offense/ForceCategory)',
x469s48 NUMBER PATH 'count(Offense/OtherContent[Info="gang"])',
x470s49 NUMBER PATH 'count(Offense/OtherContent[Info="home invasion"])',
x471s50 NUMBER PATH 'count(Offense/OtherContent[Info="identity theft"])',
x472s51 NUMBER PATH 'count(Offense/MOCrimeAndMotive)',
x473s52 NUMBER PATH 'count(Offense/Offense)',
x474s53 NUMBER PATH 'count(Offense/OffenseTerrorismIndicator)',
x475s54 NUMBER PATH 'count(Organization[AssociationReference/AssociationType="ActivityResponsibleOrganizationAssociation" and AssociationReference/AssociationGUID=Activity[OtherContent/Info="Warrant"]/AssociationReference/AssociationGUID]/Name)',
x476s55 NUMBER PATH 'count(Organization[AssociationReference/AssociationType="ActivityResponsibleOrganizationAssociation" and AssociationReference/AssociationGUID=Activity[OtherContent/Info="Warrant"]/AssociationReference/AssociationGUID]/OrganizationID)',
x477s56 NUMBER PATH 'count(Organization[AssociationReference/AssociationType="ActivityInvolvedOrganizationAssociation" and AssociationReference/AssociationGUID=Activity[OtherContent/Info="Warrant"]/AssociationReference/AssociationGUID]/Name)',
x478s57 NUMBER PATH 'count(Organization[AssociationReference/AssociationType="ActivityInvolvedOrganizationAssociation" and AssociationReference/AssociationGUID=Activity[OtherContent/Info="Warrant"]/AssociationReference/AssociationGUID]/OrganizationID)',
x424s3 NUMBER PATH 'count(Person[Role="Witness"]/WitnessNarrative)',
x425s4 NUMBER PATH 'count(Person[Role="Witness" and string-length(Name/FirstName)>0])',
x426s5 NUMBER PATH 'count(Person[Role="Witness" and string-length(Name/FullName)>0])',
x427s6 NUMBER PATH 'count(Person[Role="Witness" and string-length(Name/LastName)>0])',
x428s7 NUMBER PATH 'count(Person[Role="Witness" and string-length(Name/MiddleName)>0])',
x429s8 NUMBER PATH 'count(Person[Role="Witness" and string-length(Sex)>0])',
x422s1 NUMBER PATH 'count(Arrest/Date)',
x423s2 NUMBER PATH 'count(Arrest/Narrative)',
x430s9 NUMBER PATH 'count(Person[AssociationReference/AssociationType="PersonAssignedUnitAssociation" and AssociationReference/AssociationGUID=Organization[string-length(OrganizationID)>0]/AssociationReference[AssociationType="PersonAssignedUnitAssociation"]/AssociationGUID])',
x431s10 NUMBER PATH 'count(Person[Role="Enforcement Official" and string-length(Name/FirstName)>0])',
x432s11 NUMBER PATH 'count(Person[Role="Enforcement Official" and string-length(Name/FullName)>0])',
x433s12 NUMBER PATH 'count(Person[Role="Enforcement Official" and string-length(Name/LastName)>0])',
x434s13 NUMBER PATH 'count(Person[Role="Enforcement Official" and string-length(Name/MiddleName)>0])',
x435s14 NUMBER PATH 'count(Person[Role="Arrest Subject"]/SubjectJuvenileSubmissionIndicator)',
x436s15 NUMBER PATH 'count(Person[Role="Arrest Subject" and ArrestSubjectUCROffenseCharge])',
x437s16 NUMBER PATH 'count(Person[Role="Arrest Subject"]/TreatAsAdultIndicator)',
x438s17 NUMBER PATH 'count(Person[Role="Arrest Subject"]/BirthDate)',
x439s18 NUMBER PATH 'count(Person[Role="Arrest Subject"]/Ethnicity)',
x440s19 NUMBER PATH 'count(Person[((Role="Arrest Subject") and (EyeColor or EyeColorText))])',
x441s20 NUMBER PATH 'count(Person[((Role="Arrest Subject") and (HairColor or HairColorText))])',
x442s21 NUMBER PATH 'count(Person[Role="Arrest Subject"]/Height[PointValue or MaximumValue or MinimumValue])',
x443s22 NUMBER PATH 'count(Person[Role="Arrest Subject"]/DriverLicenseID/ID)',
x444s23 NUMBER PATH 'count(Person[Role="Arrest Subject"]/FBINumber)',
x445s24 NUMBER PATH 'count(Person[Role="Arrest Subject"]/PassportID/ID)',
x446s25 NUMBER PATH 'count(Person[Role="Arrest Subject"]/SSN)',
x447s26 NUMBER PATH 'count(Person[Role="Arrest Subject"]/StateFingerprintID)',
x448s27 NUMBER PATH 'count(Person[Role="Arrest Subject"]/USMSFugitiveNumber)',
x449s28 NUMBER PATH 'count(Person[Role="Arrest Subject"]/Name/FirstName)',
x450s29 NUMBER PATH 'count(Person[Role="Arrest Subject"]/Name/FullName)',
x479s58 NUMBER PATH 'count(Person[AssociationReference/AssociationType="ActivityInvolvedPersonAssociation" and AssociationReference/AssociationGUID=Activity[OtherContent/Info="Warrant"]/AssociationReference/AssociationGUID])',
x480s59 NUMBER PATH 'count(Warrant/Description)',
x481s60 NUMBER PATH 'count(Organization[AssociationReference/AssociationType="ActivityInformationClearerOrganizationAssociation" and AssociationReference/AssociationGUID=Activity[OtherContent/Info="Warrant"]/AssociationReference/AssociationGUID]/Name)',
x482s61 NUMBER PATH 'count(Organization[AssociationReference/AssociationType="ActivityInformationClearerOrganizationAssociation" and AssociationReference/AssociationGUID=Activity[OtherContent/Info="Warrant"]/AssociationReference/AssociationGUID]/OrganizationID)',
x483s62 NUMBER PATH 'count(Warrant/WarrantDate)') T7
WHERE STATUS_ID=1
AND NAT_SOURCE_ID=0
AND OWNING_ORI like 'ABC123'
and n.id=met.nads_id and met.type_id=mtt.id and mtt.name='ReportType' and met.val='Arrest Report';
Edited by: drad on May 15, 2012 6:03 AM
Edited by: drad on May 15, 2012 6:04 AM
Marking as answered, as the above solution worked for me.
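For anyone skimming the thread, the essential shape of the fix is just the hint placement. This is a minimal sketch only, using the table and column names from the full query above (NADS, DATA_INTERNAL, and the N-DEx namespace) trimmed down to a single XMLTable column:

```sql
-- Minimal sketch: force a full scan of NADS, request degree-8 parallelism,
-- and disable XML query rewrite so the PX Coordinator appears in the plan.
SELECT /*+ full(n) parallel(n,8) no_xml_query_rewrite */
       COUNT(*)            AS total_records,
       SUM(t.arrest_dates) AS arrests_date_ex
FROM   nads n,
       XMLTable(XMLNAMESPACES('http://fbi.gov/cjis/N-DEx' AS "p1"),
                '/p1:DataItem' PASSING n.data_internal
                COLUMNS arrest_dates NUMBER PATH 'count(Arrest/Date)') t
WHERE  n.status_id = 1;
```

As discussed above, disabling the rewrite trades the XML rewrite optimizations for parallel execution, so compare both plans and timings before settling on it.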
Parallel Flow in BPEL not working.
We are using 10.1.3.4. We have designed a process in BPEL with parallel flows. Each flow has an invoke activity that calls an external service.
Each partnerlink has property "nonBlockingInvoke" specified.
We have also added the following properties in bpel.xml:
<configurations>
<property name="inMemoryOptimization">true</property>
<property name="completionPersistPolicy">faulted</property>
<property name="completionPersistLevel">All</property>
</configurations>
Now, after adding the configuration above along with the "nonBlockingInvoke" property on each partner link, we are getting a timeout error.
Once I remove the configuration properties it works. It also works if we remove only the nonBlockingInvoke property. But unfortunately, as per our requirement, we need both parallel processing and in-memory optimization.
Any help on this regards will be highly appreciated.
my bpel.xml
<?xml version = '1.0' encoding = 'UTF-8'?>
<BPELSuitcase>
<BPELProcess id="SyncParallelFlow" src="SyncParallelFlow.bpel">
<partnerLinkBindings>
<partnerLinkBinding name="client">
<property name="wsdlLocation">SyncParallelFlow.wsdl</property>
</partnerLinkBinding>
<partnerLinkBinding name="BPELTest1">
<property name="wsdlLocation">BPELTest1Ref.wsdl</property>
<property name="nonBlockingInvoke">true</property>
</partnerLinkBinding>
<partnerLinkBinding name="BPELTest2">
<property name="wsdlLocation">BPELTest2Ref.wsdl</property>
<property name="nonBlockingInvoke">true</property>
</partnerLinkBinding>
</partnerLinkBindings>
<configurations>
<property name="inMemoryOptimization">true</property>
<property name="completionPersistPolicy">faulted</property>
<property name="completionPersistLevel">All</property>
</configurations>
</BPELProcess>
</BPELSuitcase>
Well, remember that inMemoryOptimization works only for synchronous, transient processes (without any dehydration in between).
When you set nonBlockingInvoke=true, each invoke runs in a separate thread/transaction, which forces a dehydration point in the process.
That is why it is not working in your case.
Hope this explains.
Thanks-
[email protected]
http://www.ibiztrack.com
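Given that constraint, one possible way out (a sketch only, not tested on 10.1.3.4) is to keep nonBlockingInvoke on the partner links for parallelism and drop inMemoryOptimization, since the forced dehydration makes the process non-transient anyway:

```xml
<!-- Hypothetical bpel.xml fragment: keep the non-blocking invokes for
     parallelism, but remove inMemoryOptimization because the separate
     invoke transactions force dehydration, making the process non-transient. -->
<configurations>
  <property name="completionPersistPolicy">faulted</property>
</configurations>
```

Whether the remaining persistence settings are still appropriate depends on your audit and recovery requirements.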