Bulk cacheStore()? Bulk DB updates
Good Afternoon,
We have recently implemented a write-behind cache for our site that persists to our most active DB table. The write-behind behavior has been great at limiting the number of DB transactions by coalescing the updates for a single entity in the cache.
Our DBA has asked us to take this one step further. The improvement so far only coalesces, for each item in the table, the updates to that one row. He would like updates spanning many objects/rows to be batched as well.
He would like the updates to the DB batched using bulk SQL (i.e. 1000 at a time) in a single transaction. We have had lengthy discussions about how this could be implemented.
One: Have the original cache store to a second cache; after x amount of time, this second cache is drained and processed via a JDBC array using bulk SQL.
Two: Pipe the cached values into a JMS queue that could process 100 at a time.
Before we go down this road much further, I have been looking through the documentation to find out whether Tangosol already has a solution for this scenario, or perhaps a different approach I could propose to increase our DB performance and scalability.
Thanks for any advice!
-- Grant
Hi Robert,
Perhaps I am not understanding the write-behind implementation correctly. I thought that the write-delay specified the amount of inactive time on a specific key/value pair that would then trigger a store() on that SPECIFIC cache entry. Are you stating that after 1 minute ALL entries in the cache are written, or is that a different configuration from what I have below?
You also state that I can "group these writes together into a single batch update". Is this a Coherence configuration option, or are you suggesting I implement storeAll() to formulate the values into a batch update statement? Which is starting to sound like a good idea as I type it :)
Thanks again for your input!
<distributed-scheme>
  <scheme-name>shopping-basket-cache-scheme</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <write-delay>1m</write-delay>
      <scheme-name>ShoppingBasketRWBMScheme</scheme-name>
      <internal-cache-scheme>
        <local-scheme>
          <scheme-ref>basket-eviction</scheme-ref>
        </local-scheme>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <class-name>com.cache.coherence.shoppingbasket.ShoppingBasketCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>

<local-scheme>
  <scheme-name>basket-eviction</scheme-name>
  <eviction-policy>HYBRID</eviction-policy>
  <high-units>5000</high-units>
  <expiry-delay>60m</expiry-delay>
  <flush-delay>60m</flush-delay>
</local-scheme>
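Following up on the storeAll() idea: with write-behind configured, deferred writes for many entries can arrive at the CacheStore together, and storeAll(Map) is the natural place to build one JDBC batch per chunk of rows. The sketch below shows only the chunking step; the class and constant names are mine, not part of the Coherence API, and a real implementation would turn each chunk into PreparedStatement.addBatch()/executeBatch() calls inside a single transaction.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: how a CacheStore.storeAll(Map) implementation might split the
// coalesced write-behind entries into batches of at most BATCH_SIZE rows,
// each batch becoming one JDBC batch executed in a single transaction.
public class BatchedStoreSketch {
    static final int BATCH_SIZE = 1000; // illustrative; tune per DBA guidance

    // Partition the entry set into chunks of BATCH_SIZE entries each.
    static <K, V> List<List<Map.Entry<K, V>>> toBatches(Map<K, V> entries) {
        List<List<Map.Entry<K, V>>> batches = new ArrayList<>();
        List<Map.Entry<K, V>> current = new ArrayList<>();
        for (Map.Entry<K, V> e : entries.entrySet()) {
            current.add(e);
            if (current.size() == BATCH_SIZE) {
                batches.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }

    public static void main(String[] args) {
        Map<Integer, String> baskets = new LinkedHashMap<>();
        for (int i = 0; i < 2500; i++) {
            baskets.put(i, "basket-" + i);
        }
        // 2500 entries split into batches of 1000, 1000 and 500
        for (List<Map.Entry<Integer, String>> batch : toBatches(baskets)) {
            System.out.println("executeBatch() with " + batch.size() + " rows");
        }
    }
}
```

In the real CacheStore, each batch would bind its entries' key/value columns to one PreparedStatement, call addBatch() per row, then executeBatch() and commit once per chunk.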
Similar Messages
-
Re: ORA-06550 in FORALL INSERT... Please help me
DECLARE
  CURSOR c1 IS
    SELECT fs.user_id, fs.lot_id, ts.ml_ac_no, ts.td_prs_dt, ts.unit_cost, ts.cost_basis
      FROM tb_xop_sharelot_fraction_snap fs, tb_xop_sharelot ts
     WHERE fs.lot_id = ts.lot_id AND fs.user_id = ts.user_id;
  TYPE ty_tab1 IS TABLE OF c1%ROWTYPE INDEX BY PLS_INTEGER;
  ltab1 ty_tab1;
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 BULK COLLECT INTO ltab1 LIMIT 5000;
    LOOP
      FORALL i IN ltab1.FIRST .. ltab1.LAST
        UPDATE tb_xop_sharelot_fraction_snap
           SET ml_ac_no   = ltab1(i).ml_ac_no
             , td_prs_dt  = ltab1(i).td_prs_dt
             , unit_cost  = ltab1(i).unit_cost
             , cost_basis = ltab1(i).cost_basis;
      COMMIT;
    END LOOP;
    EXIT WHEN c1%NOTFOUND;
  END LOOP;
  CLOSE c1;
  DBMS_OUTPUT.PUT_LINE('ml_ac_no, td_prs_dt, unit_cost and cost_basis columns are updated successfully: ' || ltab1.COUNT);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE(SQLCODE || ' ' || SQLERRM);
END;
ORA-06550: line 21, column 37:
PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records
ORA-06550: line 21, column 37:
PLS-00382: expression is of wrong type
ORA-06550: line 20, column 36:
PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records
ORA-06550: line 20, column 36:
PLS-00382: expression is of wrong type
ORA-06550: line 19, column 56:
PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records
ORA-06550: line 19, column 56:
PLS-00382: expression is of wrong type
ORA-06550: line 18, column 46:
PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records
ORA-06550: line 18, column 46:
PLS-00382: expression is of wrong type
ORA-06550: line 21, column 37:
PL/SQL: ORA-22806: not an object or REF
ORA-06550: line 17, column 16:
PL/SQL: SQL Statement ignored
(Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit)
Hoek wrote:
Some more detail: http://www.oracle-developer.net/display.php?id=410
Creating an object type and using TREAT seems like overkill compared to simply creating individual collections:
SQL> declare
2 cursor v_cur is select empno,ename,sal from emp where deptno = 10;
3 type v_tbl_type is table of v_cur%rowtype index by pls_integer;
4 v_tbl v_tbl_type;
5 begin
6 select empno,ename,sal
7 bulk collect
8 into v_tbl
9 from emp
10 where deptno = 10;
11 forall i in 1..v_tbl.count
12 update tbl
13 set name = v_tbl(i).ename,
14 sal = v_tbl(i).sal;
15 end;
16 /
sal = v_tbl(i).sal;
ERROR at line 14:
ORA-06550: line 14, column 19:
PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND
table of records
ORA-06550: line 14, column 19:
PLS-00382: expression is of wrong type
ORA-06550: line 13, column 20:
PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND
table of records
ORA-06550: line 13, column 20:
PLS-00382: expression is of wrong type
ORA-06550: line 14, column 19:
PL/SQL: ORA-22806: not an object or REF
ORA-06550: line 12, column 7:
PL/SQL: SQL Statement ignored
SQL> declare
2 cursor v_cur is select empno,ename,sal from emp where deptno = 10;
3 v_empno_tbl sys.OdciNumberList;
4 v_ename_tbl sys.OdciVarchar2List;
5 v_sal_tbl sys.OdciNumberList;
6 begin
7 select empno,ename,sal
8 bulk collect
9 into v_empno_tbl,v_ename_tbl,v_sal_tbl
10 from emp
11 where deptno = 10;
12 forall i in 1..v_empno_tbl.count
13 update tbl
14 set name = v_ename_tbl(i),
15 sal = v_sal_tbl(i);
16 end;
17 /
PL/SQL procedure successfully completed.
SY.
-
Bulk Insert/Update in Toplink 10.1.3.
Hi experts,
I want to update a column in the database for all rows of a particular table. I also want the change to be reflected in the TopLink cache on all nodes of the WebLogic cluster. The caches on the various nodes are synchronized using a JMS Topic. I want to avoid registering all these objects in the UnitOfWork for performance reasons. The changes do not seem to propagate when I use other bulk update methods. Is there a standard way of doing this?
Thanks,
Kamal
You can update a set of rows using an Update All JPQL query in JPA, or using the native UpdateAllQuery class.
An Update All query will invalidate the local cache, but is not currently broadcast across cache coordination.
The Cache/IdentityMapAccessor invalidateObject() API allows an object to be invalidated across the cluster, but not a class or query set.
Please log a bug for this on EclipseLink, and contact Oracle technical support if you need a patch for this.
James : http://www.eclipselink.org -
Bulk Actions - Update not working
Hi -
If I try to run an update through Bulk Actions the resource fields are not getting updated
i.e.
command,user,accounts[Blah].lastname
CreateOrUpdate,AAC259,Bloggs
This doesn't update the resource, and it doesn't appear in the updates section of the view when debugging.
We're running IDM7 - anyone know what's causing this?
Cheers
--Calum
I believe you have to have a resource mapped in your commands:
Command,global.firstname,waveset.resources,.....
CreateOrUpdate,myfirstname,myResourceName
--sFed -
What workflow runs when the bulk action 'update only Lighthouse' is run?
Hello,
We were trying to set some deferred tasks via our bulk action list. To do this we are running Updates and passing a parameter that is noticed by our custom Update workflow, which then sets the deferred tasks. This takes a while to execute, however, and we have about 30,000 accounts we want to update. To try to speed it up, we were playing around with the option to update only the Lighthouse account. However, when this runs, it does not fire our custom Update workflow. So what workflow is it firing off? Can this workflow be modified?
Thanks,
Jim
Try using the BPE workflow debugger to identify the workflow that runs. Set a breakpoint in the user form assigned to the administrator you are running the bulk action as and proceed step by step. You should get the workflow being executed.
-
Bulk Metadata Update using RIDC API!!
Hi,
In my program I have to update metadata values for bulk content. For example, I would like to update the 'xComments' field through my RIDC API call in Java for 40K content items. In the last run the program was taking a huge amount of time (in hours) to update the contents.
Could you please advise me on any performance fine-tuning or an alternate operation to reduce the metadata update time? The Java program runs in a single thread. Find the RIDC doc-update code below.
Code:
import oracle.stellent.ridc.*;
import oracle.stellent.ridc.model.*;
import oracle.stellent.ridc.protocol.*;
public class UpdateMetadata {
private static IdcContext userContext = null;
static public void main(String args[]) throws Exception {
System.out.println("RIDC - tests");
IdcClient idcClient = null;
try {
// Create the Manager
IdcClientManager manager = new IdcClientManager();
idcClient = manager.createClient("idc://<IPADDRESS>:4444");
IdcContext userContext = new IdcContext("sysadmin");
DataBinder binder = idcClient.createBinder();
binder.putLocal("IdcService", "UPDATE_DOCINFO");
for(int i=0;i<Resultset.size();i++)
binder.putLocal("dID", resultset.did);
binder.putLocal("dDocName", resultset.ContentID[i]);
binder.putLocal("xComments","****True****");
ServiceResponse response = idcClient.sendRequest(userContext,binder);
System.out.println("Processed");
DataResultSet resultSet = binders.getResultSet ("SearchResults");
// loop over the results
for (DataObject dataObject : resultSet.getRows ()) {
System.out.println ("Title is: " + dataObject.get ("dDocTitle"));
System.out.println ("Author is: " + dataObject.get ("dDocAuthor"));
} catch (IdcClientException ie) {
System.out.println("Exception while creating the client" + ie);
The code seems to be incomplete:
- there are some variables like Resultset, resultset that are not initialized in the code (but probably are somewhere else)
- there is a line }*/, but no starting /* (could be corrupted by the editor)
Anyway, since you write that "In the last run the program was taking a huge amount of time (in hours) to update the contents," I'd focus first on one question: what has changed? Btw, what is the expected (experienced?) result?
As for the code, it looks OK to me. You could probably throw away everything after ServiceResponse response = idcClient.sendRequest(userContext, binder); but it seems the second loop is commented out anyway.
To reach better performance, you can certainly try to run the program in several threads - it will require some fine-tuning to find out how many threads are optimal. For CHECKIN_NEW, we had the best performance with 3 threads in parallel, but it operated on 100-500 kB files, so I'm not sure whether those results are also relevant for UPDATE_DOCINFO. I doubt, however, that improvements achieved through parallelism can be more than, let's say, 2-3x. -
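The multi-threading suggestion in the reply above could be sketched with a fixed thread pool. Here processOne() is only a placeholder for building a DataBinder and calling idcClient.sendRequest(); every class and method name in this sketch is hypothetical, not part of the RIDC API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: submitting one UPDATE_DOCINFO-style call per content item to a
// small fixed thread pool instead of looping in a single thread.
public class ParallelUpdateSketch {
    static void processOne(String contentId) {
        // In the real program this would bind dDocName/xComments on a
        // DataBinder and call idcClient.sendRequest(userContext, binder).
    }

    static int updateAll(List<String> ids, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        final AtomicInteger done = new AtomicInteger();
        for (final String id : ids) {
            pool.submit(new Runnable() {
                public void run() {
                    processOne(id);
                    done.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < 100; i++) ids.add("CONTENT" + i);
        System.out.println("updated " + updateAll(ids, 3));
    }
}
```

As noted above, the optimal thread count has to be found by experiment; start small (e.g. 3) and watch server-side contention.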
OIM 11g R2 - Bulk Catalog Update
Hi,
We have a requirement to write a custom scheduler which updates the entries in the catalog table. We have done it using the CatalogService API, but we might need to update millions of records during each run. Is there any recommended way to do this? Thanks.
It should support it. Bulk load loads the data directly into the database table using SQL*Loader. So as long as you have the UDF column in the USR table and you have specified it in the CSV file, I believe it should work.
-
I need to perform bulk updates on my tables using SQL. The tables are really very big, and most updates touch a couple of million records, so the process is time-consuming and very slow. Is there anything I could do to fine-tune these update statements? Please advise. Some of the SQL statements I use are as follows:
update test set gid=1 where gid is null and pid between 0 and 1;
update test set gid=2 where gid is null and pid between 1 and 5;
update test set gid=3 where gid is null and pid between 5 and 10;
update test set gid=4 where gid is null and pid between 10 and 15;
update test set gid=5 where gid is null and pid between 15 and 70;
update test set gid=6 where gid is null and pid between 70 and 100;
update test set gid=7 where gid is null and pid between 100 and 150;
update test set gid=8 where gid is null and pid between 150 and 200;
update test set gid=9 where gid is null and pid between 200 and 300;
Message was edited by:
user567669
Indeed, check out the predicate:
SQL> explain plan for
2 select *
3 from emp
4 where sal between 1000 and 2000;
Explained.
SQL> @utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3956160932
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 185 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| EMP | 5 | 185 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
1 - filter("SAL"<=2000 AND "SAL">=1000) -
Bulk Action - Update Only Lighthouse account
Does anybody know how to only update the internal Lighthouse account in the CSV data itself when loading via a Bulk Action when using the "Action=From Action List" drop down option?
I know you can select the "Only update the Identity system account" check box on the screen, but would like a way to add it to the CSV data as a backup in case the check box is accidentally left unchecked.
The documentation says to use the "waveset.resources" column, but if I set the data to "Lighthouse", it throws the following error. I've tried a few other values also, but none have worked.
com.waveset.exception.ItemNotFound: Resource:Lighthouse
=======================================
Options chosen on the Launch Bulk Actions page:
=======================================
Action = From Action List
Check boxes = All unchecked
Correlation Rule = User Name Matches AccountId
Get Action List From = File
*All others set to the default
==========
Data Format:
==========
command,user,accounts[Lighthouse].adminRoles,waveset.resources
===========
Sample Data:
===========
Update,10000009,|Remove|Helpdesk Admin,Lighthouse
Thanks.
Just remove waveset.resources and Lighthouse from it...
It must work...
Thanks
Nsankar -
PSI Bulk Project update on Custom Field lookup tables.
Using PSI, is it possible to update a project custom field which has a lookup table? I tried, but it throws the following exception:
An unhandled exception of type 'System.Web.Services.Protocols.SoapException' occurred in System.Web.Services.dll
Additional information: ProjectServerError(s) LastError=CICOCheckedOutInOtherSession Instructions: Pass this into PSClientError constructor to access all error information
Any Idea on this.
VIKRAM
Hello,
If you are trying to update the lookup table values, see this PowerShell example:
https://gallery.technet.microsoft.com/Update-Server-Lookup-table-bb1ae14f
Paul
Paul Mather | Twitter |
http://pwmather.wordpress.com | CPS |
MVP | Downloads -
Bulk table update returning ORA-00001: unique constraint
I'm trying to update every record in a PROPERTY table that has a CLASS of either 1 or 9 and a STATUS of 'LHLD'. CLASS and STATUS descriptor records are in different tables from the PROPERTY table but reference the PROPERTY records by the PROPERTY table's unid.
I have written the following update command,
UPDATE RNT_PROPERTY_DESCRIPTOR SET DESCRIPTOR = 'PROP', DESCRIPTOR_VALUE = '1', EFFECT_DATE = '01-APR-04', USER_ID = 'USER'
WHERE RNT_PROPERTY_DESCRIPTOR.UNID IN (SELECT PROPERTY.UNID FROM PROPERTY, PROPERTY_CLASS_STATUS
WHERE PROPERTY_CLASS_STATUS.PROP_CLASS = '1'
OR PROPERTY_CLASS_STATUS .PROP_CLASS = '9'
AND PROPERTY.UNID IN (SELECT PROPERTY.UNID FROM PROPERTY, PROP_STATUS_HIST
WHERE PROP_STATUS_HIST.code = 'LHLD'));
However, after executing for around 10 mins the process update fails and the following error is returned:
ORA-00001: unique constraint (RNT_PROPERTY_DESCRIPTOR_IDX) violated
I know that the IDX suffix refers to the table INDEX, but I'm not sure why I'm getting a key constraint violation; none of the columns that I'm trying to update should need to be unique.
For info the PROPERTY table has around 250,000 rows.
Any ideas? Is there an error in my update statement?
Thanks in advance.
Gintsp,
can you explain a little more? I'm not sure what you are suggesting that I try.
Here is the output of what I have tried
SQL> UPDATE RNT_PROPERTY_DESCRIPTOR SET DESCRIPTOR = 'PROP', DESCRIPTOR_VALUE = '1', EFFECT_DATE = '01-APR-04', USER_ID = 'USER'
2 WHERE RNT_PROPERTY_DESCRIPTOR.UNID IN (SELECT PROPERTY.UNID FROM PROPERTY, PROPERTY_CLASS_STATUS
3 WHERE PROPERTY_CLASS_STATUS.PROP_CLASS = '1'
4 OR PROPERTY_CLASS_STATUS.PROP_CLASS = '9'
5 AND PROPERTY.UNID IN (SELECT PROPERTY.UNID FROM PROPERTY, PROP_STATUS_HIST
6 WHERE PROP_STATUS_HIST.CODE = 'LHLD'));
UPDATE RNT_PROPERTY_DESCRIPTOR SET DESCRIPTOR = 'PROP', DESCRIPTOR_VALUE = '1', EFFECT_DATE = '
ERROR at line 1:
ORA-00001: unique constraint (RNT_PROPERTY_DESCRIPTOR_IDX) violated
SQL> select owner, constraint_type, table_name, search_condition from user_constraints where constraint_name = 'RNT_PROPERTY_DESCRIPTOR_IDX';
no rows selected
The RNT_PROPERTY_DESCRIPTOR table structure is as follows:
Name Null? Type
UPRN NOT NULL NUMBER(7)
DESCRIPTOR NOT NULL VARCHAR2(4)
DESCRIPTOR_VALUE VARCHAR2(11)
EFFECT_DATE NOT NULL DATE
VALUE_DESCRIPTION VARCHAR2(35)
POINTS NUMBER(2)
POUNDS NUMBER(5,2)
SUPERSEDED VARCHAR2(1)
CURRENT_FLAG VARCHAR2(1)
FUTURE VARCHAR2(1)
END_EFFECT_DATE DATE
USER_ID NOT NULL VARCHAR2(10)
CREATE_DATE DATE
------------------------------------------------------------- -
Using Bulk operations for INSERT into destination table and delete from src
Hi,
Is there any way to expediate the process of data movement?
I have a source set of tables (with its pk-fk relations) and a destination set of tables.
Currently my code is picking up a single record in a cursor from the parentmost table, and then moving the data from the other respective tables. But this is happening one by one... Is there any way I can make this take less time?
If I use bulk insert and collections, i will not be able to use the DELETE in the same block for same source record.
Thanks
Regards
Abhivyakti
Abhivyakti,
I'm not 100% sure how your code flows from what you've stated, but generally you should try and avoid cursor FOR LOOPS and possibly BULK COLLECTING.
I always follow the sequence in terms of design:
1. Attempt to use bulk INSERTS, UPDATES and/or DELETES first and foremost. (include MERGE as well!)
2. If one cannot possibly do the above then USE BULK COLLECTIONS using a combination of RETURNING INTO's and
FORALL's.
However, before you follow this method, and if you are relatively new to Oracle PL/SQL, share the reason you cannot follow the first method on this forum, and you're bound to find some help with sticking to method one!
3. If method two is impossible, and there would have to be a seriously good reason for this, then follow the cursor FOR LOOP
method.
You can combine BULK COLLECT (via RETURNING INTO) with UPDATEs and DELETEs, but not with INSERTs:
bulk collect into after insert ?
Another simple example of BULK COLLECTING
Re: Reading multiple table type objects returned
P; -
Transfer bulk data using replication
Hi,
We are having transactional replication setup between two database where one is publisher and other is subscriber. We are using push subscription for this setup.
The problem comes when we have bulk data updates on the publisher. On the publisher side the update command completes in 4 mins, while the same takes approx 30 mins to reach the subscriber side. We have tried customizing the different properties in the Agent
Profile like MaxBatchSize, SubscriptionStreams etc., but none of this is of any help. I have tried breaking up the command and a lot of permutations and combinations, but no success.
The data that we are dealing with is around 10 millions and our production environment is not able to handle this.
Please help. Thanks in advance!
Samagra
How is the production publisher server and subscriber server configuration? Are both the same? How about the network bandwidth? Have you tried the same task during working hours and off hours? I am thinking the problem may be with the network as well as with both servers' configuration.
If you are doing huge operations with replication this is always expected; either you should have adequate capacity, or you should divide the workload on your publisher server to avoid all these issues. Why can't you split the transactions?
Raju Rasagounder Sr MSSQL DBA -
LOOP inside FORALL in bulk binding
Can I use a loop inside FORALL in bulk-bind updates?
As I understand it, a FORALL statement strictly loops through the bulk limit size for the immediately following statement only.
I am attempting to use a loop there to update more than one table.
cursor c is select id from temp where dt_date > sysdate-30;
BEGIN
loop
fetch c into v_id;
limit 1000;
forall i in 1..v_id.count
UPDATE table_one set new_id = v_id(i);
exit when C%NOTFOUND;
end loop;
end;
I want to update another table table_two also immediately after updating table_one like this:
forall i in 1..v_id.count
UPDATE table_one set new_id = v_id(i);
BEGIN select nvl(code,'N/A') into v_code from T_CODES where ID_NO = v_id(i); EXCEPTION WHEN NO_DATA_FOUND v_code='N/A'; END;
UPDATE table_two set new_code =v_code;
exit when C% not found.
This is not working and when I run it, I get an error saying encountered BEGIN when one of the following is expected.
I got around this by having another FOR loop just to set up the values in another array variable and using that value in another second forall loop to update table_two.
Is there any way to do this multiple table udpates in bulkbinding under one forall loop that would enable to do some derivation/calculation if needed among variables [not array variables, regular datatype variables].
Can we have like
forall j in 1.. v_id.count
LOOP
update table 1;
derive values for updating table 2;
update table 2;
END LOOP;
Thank You.
Well, without questioning the reasons why you want this, you are confusing bulk select and FORALL. You need:
begin
loop
fetch c bulk collect into v_id limit 1000;
exit when v_id.count = 0;
forall i in 1..v_id.count
UPDATE table_one set new_id = v_id(i);
end loop;
end;
/
SY. -
How to decide the limit in bulk collect clause
Hi,
we have got a PL/SQL application which performs mass DML, including bulk insert, update and merge over millions of rows. Now I am a little bit confused about deciding the LIMIT in the BULK COLLECT clause. Is there any way to decide the optimal limit for my BULK COLLECT clause? And I want to know what the key factors are that affect the limit in BULK COLLECT.
Eagerly waiting for your reply...
thanks
somy
Hello,
Check this example out; it might help you. It all depends on how much memory you want to allocate to do this job; you have to experiment to find the optimal value (watch memory consumption and the speed of the PL/SQL block). There is no formula for finding the optimal value, as every system is configured differently, so you have to see how your Oracle (memory-related) parameters are configured and monitor the system while this is running. I had used 500 for around 2.8 million rows.
DECLARE
TYPE array
IS
TABLE OF my_objects%ROWTYPE
INDEX BY BINARY_INTEGER;
data array;
errors NUMBER;
dml_errors exception;
error_count NUMBER := 0;
PRAGMA EXCEPTION_INIT (dml_errors, -24381);
CURSOR mycur
IS
SELECT *
FROM t;
BEGIN
OPEN mycur;
LOOP
FETCH mycur BULK COLLECT INTO data LIMIT 100;
BEGIN
FORALL i IN 1 .. data.COUNT
SAVE EXCEPTIONS
INSERT INTO my_new_objects
VALUES data (i);
EXCEPTION
WHEN dml_errors
THEN
errors := sql%BULK_EXCEPTIONS.COUNT;
error_count := error_count + errors;
FOR i IN 1 .. errors
LOOP
DBMS_OUTPUT.put_line( 'Error occurred during iteration '
|| sql%BULK_EXCEPTIONS(i).ERROR_INDEX
|| ' Oracle error is '
|| sql%BULK_EXCEPTIONS(i).ERROR_CODE);
END LOOP;
END;
EXIT WHEN mycur%NOTFOUND;
END LOOP;
CLOSE mycur;
DBMS_OUTPUT.put_line (error_count || ' total errors');
END;
Regards
OrionNet
Edited by: OrionNet on Dec 17, 2008 12:55 AM