Process CSV files dynamically
Hi all,
I need to process CSV files dynamically, i.e. not splitting the values of each line into variables in a fixed order, but determining the mapping dynamically from a header line in the CSV file.
For instance:
#dcu;item;subitem
827160;5111001000;3150
So if the order in the CSV file changes to e.g.
#item;dcu;subitem
5111001000;827160;3150
the program should fill the corresponding variables (ls_upload-dcu, ls_upload-item, ls_upload-subitem) dynamically, without any need to change the code.
If anyone has a solution, please let me know.
Thanks,
Thomas
Try to use the variants concept.
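Beyond variants, the generic technique is to read the header line first, build a field-name-to-position map, and apply that map to every data row. A minimal sketch in Python rather than ABAP, using the semicolon delimiter and field names from the example above:

```python
import csv
import io

def parse_upload(text):
    """Parse a semicolon-delimited CSV whose first line is a header
    (optionally prefixed with '#') naming the columns in any order."""
    lines = text.splitlines()
    # Strip the leading '#' so the header names match the record keys.
    lines[0] = lines[0].lstrip("#")
    reader = csv.DictReader(io.StringIO("\n".join(lines)), delimiter=";")
    # Each row is now a dict keyed by column name, independent of order.
    return [dict(row) for row in reader]

# Both column orderings from the question yield the same mapping:
a = parse_upload("#dcu;item;subitem\n827160;5111001000;3150")
b = parse_upload("#item;dcu;subitem\n5111001000;827160;3150")
assert a == b == [{"dcu": "827160", "item": "5111001000", "subitem": "3150"}]
```

In ABAP the same idea maps header names to structure components via dynamic `ASSIGN COMPONENT ... OF STRUCTURE`.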
Similar Messages
-
How to update a CSV file dynamically from Catalog Manager
HI - In OBIEE we generate report statistics into CSV files from Catalog Manager. But this task should be an automatic or dynamic process: whenever the catalog changes, that information should be captured into the CSV file dynamically. To achieve this we need a script or batch job, etc.
Your help is highly appreciated.
Yes, we can achieve this by enabling Usage Tracking.
Create a report with all the columns you need and schedule the report using Agents with some time frame, say EOD every day. There we can see all the changes done to the catalog.
Do let me know if you need any help to achieve this.
Mark if it helps,
Thanks, -
Error while processing a CSV file
Hi,
I get these messages while processing a CSV file to load its content into the database:
[SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL;
data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console.
[SSIS.Pipeline] Warning: The output column "Copy of Column 13" (842) on output "Data Conversion Output" (798) and component
"Data Conversion" (796) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Can someone please help me with this?
With Regards,
Litu
Hello Litu,
Are you using: OLEDB Source and Destination ?
Can you change the source and destination provider to ADO.NET and check if the issue still persist?
Mark this post as "Answered" if this addresses your question.
Regards,
Don Rohan [MSFT]
-
External table which can handle appending multiple CSV files dynamically
I need an external table which can handle appending the values of multiple CSV files.
The problem I am having is that the number of CSV files is not fixed.
I can have between 2 and 6-7 files with the current date as a suffix. Let's say they will be named like my_file1_aug_08_1.csv, my_file1_aug_08_2.csv, my_file1_aug_08_3.csv and so on.
I could do it with hardcoding as follows if I knew the number of files, but unfortunately the number is not fixed, so I need something dynamic using a wildcard search of the file pattern.
CREATE TABLE my_et_tbl
( my_field1 varchar2(4000),
my_field2 varchar2(4000) )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY my_et_dir
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL )
LOCATION (UTL_DIR:'my_file2_5_aug_08.csv','my_file2_5_aug_08.csv') )
REJECT LIMIT UNLIMITED
NOPARALLEL
NOMONITORING;
Please advise me with your ideas. Thanks.
Joshua.
Well, you could do it dynamically by constructing the location value:
SQL> CREATE TABLE emp_load
2 (
3 employee_number CHAR(5),
4 employee_dob CHAR(20),
5 employee_last_name CHAR(20),
6 employee_first_name CHAR(15),
7 employee_middle_name CHAR(15),
8 employee_hire_date DATE
9 )
10 ORGANIZATION EXTERNAL
11 (
12 TYPE ORACLE_LOADER
13 DEFAULT DIRECTORY tmp
14 ACCESS PARAMETERS
15 (
16 RECORDS DELIMITED BY NEWLINE
17 FIELDS (
18 employee_number CHAR(2),
19 employee_dob CHAR(20),
20 employee_last_name CHAR(18),
21 employee_first_name CHAR(11),
22 employee_middle_name CHAR(11),
23 employee_hire_date CHAR(10) date_format DATE mask "mm/dd/yyyy"
24 )
25 )
26 LOCATION ('info*.dat')
27 )
28 /
Table created.
SQL> select * from emp_load;
select * from emp_load
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
SQL> set serveroutput on
SQL> declare
2 v_exists boolean;
3 v_file_length number;
4 v_blocksize number;
5 v_stmt varchar2(1000) := 'alter table emp_load location(';
6 i number := 1;
7 begin
8 loop
9 utl_file.fgetattr(
10 'TMP',
11 'info' || i || '.dat',
12 v_exists,
13 v_file_length,
14 v_blocksize
15 );
16 exit when not v_exists;
17 v_stmt := v_stmt || '''info' || i || '.dat'',';
18 i := i + 1;
19 end loop;
20 v_stmt := rtrim(v_stmt,',') || ')';
21 dbms_output.put_line(v_stmt);
22 execute immediate v_stmt;
23 end;
24 /
alter table emp_load location('info1.dat','info2.dat')
PL/SQL procedure successfully completed.
SQL> select * from emp_load;
EMPLO EMPLOYEE_DOB EMPLOYEE_LAST_NAME EMPLOYEE_FIRST_ EMPLOYEE_MIDDLE
EMPLOYEE_
56 november, 15, 1980 baker mary alice 0
01-SEP-04
87 december, 20, 1970 roper lisa marie 0
01-JAN-99
SQL>
SY.
P.S. Keep in mind that changing location will affect all sessions referencing external table. -
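The same file-discovery idea can be sketched outside the database as well: enumerate the files that actually exist, then build the `ALTER TABLE ... LOCATION` statement from that list. A hypothetical Python sketch (the directory path and file pattern are placeholders):

```python
import glob
import os

def build_location_stmt(table, directory, pattern):
    """Build an ALTER TABLE ... LOCATION (...) statement from the
    data files currently present in the directory."""
    files = sorted(os.path.basename(p)
                   for p in glob.glob(os.path.join(directory, pattern)))
    if not files:
        raise FileNotFoundError(f"no files matching {pattern} in {directory}")
    quoted = ", ".join(f"'{f}'" for f in files)
    return f"alter table {table} location({quoted})"
```

The resulting string would then be executed against the database, just as the PL/SQL loop above does with `EXECUTE IMMEDIATE`.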
Issue with processing .csv file
Hi.
I have a simple CSV file (multiple rows) which needs to be picked up by the PI file adapter and then processed into a BAPI.
I created a data type 'Record' which has the column names. Then there is a message type MT_SourceOrder using this particular data type. This message type is mapped to BAPI_SALESORDER_CREATEFROMDAT2. I have also done the configuration in the receiver file adapter to accept the .csv file:
Document Name: MT_SourceOrder
Doc Namespace:....
Recordset Name: Recordset
Recordset Structure: Row,*
Recordset Sequence: Ascending
Recordsets per message: 1000
Key Field Type: Ascending
In the parameter section, the following information has been provided:
Row.fieldNames MerchantID,OrderNumber,OrderDate,Description,VendorSKU,MerchantSKU,Size,UnitPrice,UnitCost,ItemCount,ItemLineNumber,Quantity,Name,Address1,Address2,Address3,City,State,Country,Zip,Phone,ShipMethod,ServiceType,GiftMessage,Tax,Accountno,CustomerPO
Row.fieldSeparator ,
Row.processConfiguration FromConfiguration
However, the mapping is still not working correctly.
Can anyone please help?
This is very urgent and all help is very much appreciated.
Thanks.
Anuradha SenGupta.
Hi Santosh,
I have verified the content in the source payload in SXMB_MONI.
<?xml version="1.0" encoding="utf-8"?>
<ns:MT_SourceOrder xmlns:ns="http://xxx.com/LE/PIISalesOrder/">
<Recordset>
<Row>
<MerchantID>PII</MerchantID>
<OrderNumber>ORD-0504703</OrderNumber>
<OrderDate>8/11/2011</OrderDate>
<Description>"Callaway 60"" Women's Umbrella"</Description>
<VendorSKU>4824S</VendorSKU>
<MerchantSKU>CAL5909002</MerchantSKU>
<Size></Size>
<UnitPrice>25</UnitPrice>
<UnitCost>25</UnitCost>
<ItemCount>2</ItemCount>
<ItemLineNumber>1</ItemLineNumber>
<Quantity></Quantity>
<Name>MARGARET A HAIGHT MARGARET A HAIGHT</Name>
<Address1>15519 LOYALIST PKY</Address1>
<Address2></Address2>
<Address3></Address3>
<City>BLOOMFIELD</City>
<State>ON</State>
<Country></Country>
<Zip>K0K1G0</Zip>
<Phone>(613)399-5615 x5615</Phone>
<ShipMethod>Purolator</ShipMethod>
<ServiceType>Standard</ServiceType>
<GiftMessage>PI Holding this cost-</GiftMessage>
<Tax></Tax>
<Accountno>1217254</Accountno>
<CustomerPO>CIBC00047297</CustomerPO>
</Row>
</Recordset>
</ns:MT_SourceOrder>
It looks as above. However, in the message mapping section it doesn't work - when I do Display Queue on the source field it keeps showing the value as NULL.
Please advise.
Thanks.
Anuradha. -
Can OBIEE accept data directly from a CSV file for inclusion in the business layer?
Create a DSN, point it to the location of the CSV file, and import it as a regular database.
Each sheet shows up as a separate table.
Mark if it helps
Thanks, -
How to read/write a .CSV file into a CLOB column in a table of Oracle 10g
I have a requirement involving a table with two columns:
create table emp_data (empid number, report clob)
Here the REPORT column is of CLOB data type, which is used to load the data from the .csv file.
The requirement here is
1) How to load data from the .CSV file into the CLOB column, along with empid, using the DBMS_LOB utility.
2) How to read the report column so that it returns all the columns present in the .CSV file (dynamically, because every CSV file may have a different number of columns) along with the primary key empid.
eg: empid report_field1 report_field2
1 x y
Any help would be appreciated.
If I understand you right, you want each row in your table to contain an emp_id and the complete text of a multi-record .csv file.
It's not clear how you relate emp_id to the appropriate file to be read. Is the emp_id stored in the CSV file?
To read the file, you can use functions from [UTL_FILE|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/u_file.htm#BABGGEDF] (as long as the file is in a directory accessible to the Oracle server):
declare
lt_report_clob CLOB;
l_max_line_length integer := 1024; -- set as high as the longest line in your file
l_infile UTL_FILE.file_type;
l_buffer varchar2(1024);
l_emp_id report_table.emp_id%type := 123; -- not clear where emp_id comes from
l_filename varchar2(200) := 'my_file_name.csv'; -- get this from somewhere
begin
-- open the file; we assume an Oracle directory has already been created
l_infile := utl_file.fopen('CSV_DIRECTORY', l_filename, 'r', l_max_line_length);
-- initialise the empty clob
dbms_lob.createtemporary(lt_report_clob, TRUE, DBMS_LOB.session);
loop
begin
utl_file.get_line(l_infile, l_buffer);
dbms_lob.append(lt_report_clob, l_buffer);
exception
when no_data_found then
exit;
end;
end loop;
insert into report_table (emp_id, report)
values (l_emp_id, lt_report_clob);
-- free the temporary lob
dbms_lob.freetemporary(lt_report_clob);
-- close the file
UTL_FILE.fclose(l_infile);
end;
This simple line-by-line approach is easy to understand, and gives you an opportunity (if you want) to take each line of the file and transform it (for example, into a nested table, or into XML). However, it can be rather slow if there are many records in the CSV file - the LOB append operation is not particularly efficient. I was able to improve the efficiency by caching the lines in a VARCHAR2 up to a maximum cache size, and only then appending to the LOB - see [three posts on my blog|http://preferisco.blogspot.com/search/label/lob].
There is at least one other possibility:
- you could use [DBMS_LOB.loadclobfromfile|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lob.htm#i998978]. I've not tried this before myself, but I think the procedure is described [here in the 9i docs|http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96591/adl12bfl.htm#879711]. This is likely to be faster than UTL_FILE (because it is all happening in the underlying DBMS_LOB package, possibly in a native way).
That's all for now. I haven't yet answered your question on how to report data back out of the CLOB. I would like to know how you associate employees with files; what happens if there is > 1 file per employee, etc.
HTH
Regards Nigel
Edited by: nthomas on Mar 2, 2009 11:22 AM - don't forget to fclose the file... -
Use .CSV files to UPDATE
Hi,
I have a requirement to process .CSV files and use the information to UPDATE an existing table. Each line from a file matches a row in the existing table by a unique key. There is no need to keep the data from the .CSV files afterwards.
I was planning to use a temporary table with an INSERT trigger that will perform the UPDATE, with ON COMMIT DELETE ROWS, and have sqlldr load the data in this temporary table.
But I found out that sqlldr cannot load into temporary tables (SQL*Loader-280).
What would be other options? The .CSV files are retrieved periodically (every 15 min), have all the same structure, their number can vary, and their filename is unique.
Thank you in advance.
SQL*Loader-280 "table %s is a temporary table"
*Cause: The sqlldr utility does not load temporary tables. Note that if sqlldr did allow loading of temporary tables, the data would disappear after the load completed.
*Action: Load the data into a non-temporary table.
Can't you load the data into a non-temporary table and drop it after the update? -
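The workaround suggested above - load into a regular (non-temporary) staging table, run the UPDATE by the unique key, then clear the staging table - can be sketched as follows. This is Python with sqlite3 standing in for the target database; the table and column names are invented for illustration:

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO target VALUES (?, ?)",
                 [(1, "old"), (2, "old")])

csv_text = "id,val\n1,new-a\n2,new-b\n"
rows = [(int(r["id"]), r["val"]) for r in csv.DictReader(io.StringIO(csv_text))]

# 1) load the file into the staging table (sqlldr's role)
conn.executemany("INSERT INTO staging VALUES (?, ?)", rows)
# 2) update the real table by the unique key
conn.execute("""UPDATE target
                SET val = (SELECT s.val FROM staging s WHERE s.id = target.id)
                WHERE id IN (SELECT id FROM staging)""")
# 3) the staged rows are no longer needed
conn.execute("DELETE FROM staging")
conn.commit()

assert conn.execute("SELECT val FROM target ORDER BY id").fetchall() == \
       [("new-a",), ("new-b",)]
```

In Oracle, steps 2 and 3 would typically live in one PL/SQL procedure (or a MERGE statement) run after each sqlldr load.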
Dynamically create CSV files and zip them
Apex version - 4.2
I have some data in the database. I want to generate multiple csv files dynamically from those data. Then zip the csv files together to provide it to the end user.
Can anybody suggest with the approach that could be followed.
Thanks in advance!
Find some code to generate the CSV, for instance Export and save result set in CSV format using UTL_FILE
Find some code to zip the files/blobs, for instance http://technology.amis.nl/2010/06/09/parsing-a-microsoft-word-docx-and-unzip-zipfiles-with-plsql/
Combine the two pieces
Done -
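The generate-then-zip flow above can be sketched in a few lines; here is a hypothetical Python version (outside the database, with invented file names and row data):

```python
import csv
import io
import zipfile

def csvs_to_zip(datasets):
    """datasets: {filename: list of rows}. Returns the zip archive as bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, rows in datasets.items():
            text = io.StringIO()
            csv.writer(text).writerows(rows)
            # One CSV member per dataset, written straight from memory.
            zf.writestr(name, text.getvalue())
    return buf.getvalue()

payload = csvs_to_zip({
    "emp.csv":  [["id", "name"], [1, "Smith"]],
    "dept.csv": [["id", "dept"], [10, "Sales"]],
})
with zipfile.ZipFile(io.BytesIO(payload)) as zf:
    assert sorted(zf.namelist()) == ["dept.csv", "emp.csv"]
```

In an APEX context the resulting bytes would be served as a BLOB download; the PL/SQL zip utility linked above plays the role of `zipfile` here.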
Dynamically creating an Oracle table with a CSV file as source
Hi,
We have a requirement to create a dynamic external table. Whenever the data or the number of columns changes in the CSV file, the table should be replaced with the current data and current number of columns. As we are inexperienced in Oracle, please give us a clear solution. We have already tried some code, but are getting errors. The code is given below.
thank you
We have executed this code after changing the schema name and table name; everything else is the same.
Assume the following:
- Oracle User and Schema name is ALLEXPERTS
- Database name is EXPERTS
- The directory object is file_dir
- CSV file directory is /export/home/log
- The csv file name is ALLEXPERTS_CSV.log
- The table name is all_experts_tbl
1. Create a directory object in Oracle. The directory will point to the directory where the file located.
conn sys/{password}@EXPERTS as sysdba;
CREATE OR REPLACE DIRECTORY file_dir AS '/export/home/log';
2. Grant the directory privilege to the user
GRANT READ ON DIRECTORY file_dir TO ALLEXPERTS;
3. Create the table
Connect as ALLEXPERTS user
create table ALLEXPERTS.all_experts_tbl
(txt_line varchar2(512))
organization external
(type ORACLE_LOADER
default directory file_dir
access parameters (records delimited by newline
fields
(txt_line char(512)))
location ('ALLEXPERTS_CSV.log')
);
This will create a table that links the data to a file. Now you can treat this file as a regular table where you can use SELECT statement to retrieve the data.
PL/SQL to create the data (PSEUDO code)
CREATE OR REPLACE PROCEDURE new_proc IS
-- Setup the cursor
CURSOR c_main IS SELECT *
FROM allexperts.all_experts_tbl;
CURSOR c_first_row IS SELECT *
FROM allexperts.all_experts_tbl
WHERE ROWNUM = 1;
-- Declare Variable
l_delimiter_count NUMBER;
l_temp_counter NUMBER:=1;
l_current_row VARCHAR2(100);
l_create_statement VARCHAR2(1000);
BEGIN
-- Get the first row
-- Open the c_first_row and fetch the data into l_current_row
-- Count the number of delimiter l_current_row and set the l_delimiter_count
OPEN c_first_row;
FETCH c_first_row INTO l_current_row;
CLOSE c_first_row;
l_delimiter_count := number of delimiter in l_current_row;
-- Create the table with the right number of columns
l_create_statement := 'CREATE TABLE csv_table ( ';
WHILE l_temp_counter <= l_delimiter_count
LOOP
l_create_statement := l_create_statement || 'COL' || l_temp_counter || ' VARCHAR2(100)';
l_temp_counter := l_temp_counter + 1;
IF l_temp_counter <= l_delimiter_count THEN
l_create_statement := l_create_statement || ',';
END IF;
END LOOP;
l_create_statement := l_create_statement || ')';
EXECUTE IMMEDIATE l_create_statement;
-- Open the c_main to parse all the rows and insert into the table
FOR rec IN c_main
LOOP
-- Loop thru all the records and parse them
-- Insert the data into the table created above
END LOOP;
END;
The initial table is showing errors and the procedure is created with compilation errors.
After executing the create table i am getting the following errors
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "identifier": expecting one of: "badfile,
byteordermark, characterset, column, data, delimited, discardfile,
disable_directory_link_check, exit, fields, fixed, load, logfile, language,
nodiscardfile, nobadfile, nologfile, date_cache, processing, readsize, string,
skip, territory, varia"
KUP-01008: the bad identifier was: deli
KUP-01007: at line 1 column 9
ORA-06512: at "SYS.ORACLE_LOADER", line 19 -
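Setting the external-table error aside, the intent of the pseudo code (count the delimiters in the first row, then build a CREATE TABLE with that many generic columns) can be sketched as below. Note that the pseudo code's loop would create one column too few, since a row with N columns contains N-1 delimiters; this sketch adds one. Python with sqlite3 standing in for Oracle, names invented:

```python
import sqlite3

def create_table_from_csv_header(conn, table, first_line, delimiter=","):
    """Create `table` with one generic VARCHAR column per field
    in the CSV's first line, replacing any previous version."""
    n_cols = first_line.count(delimiter) + 1  # N-1 delimiters => N columns
    cols = ", ".join(f"COL{i} VARCHAR(100)" for i in range(1, n_cols + 1))
    conn.execute(f"DROP TABLE IF EXISTS {table}")
    conn.execute(f"CREATE TABLE {table} ({cols})")
    return n_cols

conn = sqlite3.connect(":memory:")
n = create_table_from_csv_header(conn, "csv_table", "dcu,item,subitem")
assert n == 3  # the table now has COL1..COL3
```

Re-running the function whenever the file's header changes gives the "replace the table with the current number of columns" behavior the requirement describes.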
How to call an SP with dynamic columns and output the results into a .csv file via SSIS
Hi folks, I have a challenging question here. I've created an SP called dbo.ResultsWithDynamicColumns taking one parameter of CONVERT(DATE, GETDATE()). The uniqueness of this SP is that the result does not have fixed columns, as it's based on sales from previous days. For example, on the previous day customers purchased 20 products, but today 30 products have been purchased.
Right now, in SSMS, I am able to execute this SP when supplying a parameter. What I want to achieve is to automate this process, send the result as a .csv file, and SFTP it to a server.
The SFTP part is kind of easy, as I can call WinSCP with a proper script to handle it. But how do I export the result of a dynamic SP to a .CSV file?
I've tried
EXEC xp_cmdshell ' BCP " EXEC xxxx.[dbo].[ResultsWithDynamicColumns ] @dateFrom = ''2014-01-21''" queryout "c:\path\xxxx.dat" -T -c'
SSMS gives the following error as Error = [Microsoft][SQL Server Native Client 10.0]BCP host-files must contain at least one column
any ideas?
thanks
Hui
--Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe SSIS 2008 SSAS 2008, SVN --
Hey Jakub, thanks, and I did see the #temp table issue in our 2008R2. I finally figured it out in a different way... I managed to modify this dynamic SP to output its results into a physical table. This table is dropped and recreated every time the SP gets executed. After that, I used an SSIS pkg to output this table to a file destination, which is .csv.
The downside is that if this table structure ever gets changed, this SSIS pkg will fail or not fully reflect the whole table. However, this won't happen often and I can live with that at this moment.
Thanks
--Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe SSIS 2008 SSAS 2008, SVN -- -
Hi there,
I have multiple data flows doing 90% of the same processing. The difference is in the source query's WHERE clause and the target flat file.
I used global variables to dynamically change the query's WHERE clause easily, but I need help in dynamically changing the target flat file (CSV file).
What I want to do is have multiple workflows, which will first set the global variable to a new value in the script box and then call the same data flow.
Please let me know if you have any solution or any idea which might point me in the direction of a solution.
Thank you,
Hi Raj - in your content conversion for the line item, read the additional attribute as well.
Change your source structure and add the field company_code.
As you are already sending the filename for each line item using the UDF, you just need to modify your UDF to take another input, i.e. Company Code.
or
If your PI version is > 7.1, use the graphical variable to hold the filename,
and while sending the company code information for every item just use the concat function to append the filename and company code:
http://scn.sap.com/people/william.li/blog/2008/02/13/sap-pi-71-mapping-enhancements-series-using-graphical-variable -
Read a CSV file and dynamically generate the INSERT
I have a requirement where there are multiple CSVs which need to be exported to a SQL table. So far, I am able to read the CSV file and generate the INSERT statement dynamically for selected columns; however, when the INSERT statement is passed as a parameter to $cmd.CommandText, the values are not evaluated.
How do I evaluate the string in PowerShell?
Import-Csv -Path $FileName.FullName | % {
# Insert statement.
$insert = "INSERT INTO $Tablename ($ReqColumns) Values ('"
$valCols='';
$DataCols='';
$lists = $ReqColumns.split(",");
foreach($l in $lists) {
$valCols= $valCols + '$($_.'+$l+')'','''
}
#Generate the values statement
$DataCols=($DataCols+$valCols+')').replace(",')","");
$insertStr =@("INSERT INTO $Tablename ($ReqColumns) Values ('$($DataCols))")
#The above statement generate the following insert statement
#INSERT INTO TMP_APPLE_EXPORT (PRODUCT_ID,QTY_SOLD,QTY_AVAILABLE) Values (' $($_.PRODUCT_ID)','$($_.QTY_SOLD)','$($_.QTY_AVAILABLE)' )
$cmd.CommandText = $insertStr #does not evaluate the values
#If the same statement is passed as below then it execute successfully
#$cmd.CommandText = "INSERT INTO TMP_APL_EXPORT (PRODUCT_ID,QTY_SOLD,QTY_AVAILABLE) Values (' $($_.PRODUCT_ID)','$($_.QTY_SOLD)','$($_.QTY_AVAILABLE)' )"
#Execute Query
$cmd.ExecuteNonQuery() | Out-Null
jyeragi
Hi Jyeragi,
To convert the data to the SQL table format, please try this function out-sql:
out-sql Powershell function - export pipeline contents to a new SQL Server table
If I have any misunderstanding, please let me know.
If you have any feedback on our support, please click here.
Best Regards,
Anna
TechNet Community Support -
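Independent of the PowerShell quoting issue, the robust pattern for this kind of CSV-to-table load is placeholders plus a parameter list, so the database driver evaluates the values instead of string interpolation. A hypothetical Python/sqlite3 sketch mirroring the question's TMP_APPLE_EXPORT example (the data rows are invented):

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TMP_APPLE_EXPORT
                (PRODUCT_ID TEXT, QTY_SOLD INTEGER, QTY_AVAILABLE INTEGER)""")

csv_text = "PRODUCT_ID,QTY_SOLD,QTY_AVAILABLE\nAPL-1,5,10\nAPL-2,3,7\n"
req_columns = ["PRODUCT_ID", "QTY_SOLD", "QTY_AVAILABLE"]

# Placeholders are filled by the driver, so no values are pasted by hand.
sql = "INSERT INTO TMP_APPLE_EXPORT ({}) VALUES ({})".format(
    ", ".join(req_columns), ", ".join("?" for _ in req_columns))
rows = [tuple(r[c] for c in req_columns)
        for r in csv.DictReader(io.StringIO(csv_text))]
conn.executemany(sql, rows)

assert conn.execute("SELECT COUNT(*) FROM TMP_APPLE_EXPORT").fetchone()[0] == 2
```

The PowerShell equivalent would be `SqlParameter` objects on the `SqlCommand` rather than an interpolated `CommandText`, which also avoids SQL injection from the CSV contents.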
Save a CSV file on the local host, executing a DTP Open Hub in a process chain?
Hi All,
My client has asked me to check whether there is a way to save a .CSV file, generated from an Open Hub executed by a DTP in a process chain, to the local host C:\ drive.
When I execute my DTP it works correctly and saves the .csv file to my C:\ directory.
My client doesn't want to give the users authorization on RSA1, and wants to try whether there is a way to put the DTP in a process chain,
so that when executing the process chain every user can have the file on his own local C:\ directory.
I tried this solution but it doesn't work; I get an error on executing the process chain:
Runtime Errors OBJECTS_OBJREF_NOT_ASSIGNED
Except. CX_SY_REF_IS_INITIAL
Error analysis
An exception occurred that is explained in detail below.
The exception, which is assigned to class 'CX_SY_REF_IS_INITIAL', was not
caught in
procedure "FILE_DELETE" "(METHOD)", nor was it propagated by a RAISING clause.
Since the caller of the procedure could not have anticipated that the
exception would occur, the current program is terminated.
The reason for the exception is:
You attempted to use a 'NULL' object reference (points to 'nothing') to
access a component (variable: "CL_GUI_FRONTEND_SERVICES=>HANDLE").
An object reference must point to an object (an instance of a class)
before it can be used to access components.
Either the reference was never set or it was set to 'NULL' using the
CLEAR statement.
Can you give me some advice, please?
Thanks to all,
Bilal
Hi Bilal,
Unfortunately, DTPs belonging to Open Hubs which are targeted to a local workstation file can't be executed through a process chain.
The only way of including such a DTP in a process chain is changing the Open Hub so that it writes the output file on the application server. Then you can retrieve the file - through FTP or any other means - from the application server to the local workstation.
Hope this helps.
Kind regards,
Maximiliano -
Uploading & processing of a CSV file fails in a clustered environment
We have a CSV file which is uploaded to the WebLogic application server, written to a temporary directory, and then manipulated before being written to the database.
This process works correctly in a single-server environment but fails in a clustered environment.
The file gets uploaded to the server into the temporary directory without problem.
The processing starts. When running in a cluster the csv file is replicated
to a temporary directory on the secondary server as well as the primary.
The manipulation process is running but never finishes and the browser times out
with the following message:
Message from the NSAPI plugin:
No backend server available for connection: timed out after 30 seconds.
Build date/time: Jun 3 2002 12:27:28
The server which is loading the file and processing it writes this to the log:
<17-Jul-03 15:13:12 BST> <Warning> <HTTP> <WebAppServletContext(1883426,fulfilment,/fu
lfilment) One of the getParameter family of methods called after reading from
the Serv
letInputStream, not merging post parameters>.
It doesn't make sense. Who is replicating the file? How long does it
take to process the file? Which plugin are you using as proxy to the
cluster?
-- Prasad