Bulk update of CLI Credentials in Prime 2.1
I have a number of devices in Prime 2.1 which use a management TACACS account plus SNMP to talk with Prime. The management TACACS account password has now been changed, so Prime is no longer able to use this account to manage the devices. Is there a way to update all of my devices in Prime with the new password at once?
Many thanks, Mike
Hi Mike,
Bulk update is unfortunately not possible; there is an enhancement bug open for this:
CSCuh80466 - Bulk telnet/ssh credential update for wired devices not there in PI 2.0
You can try to do this via a template:
Design > Feature Design > CLI Template > System Templates
Select the TACACS server template, edit it as needed, and click "Save as New Template". You will then find the saved template under "My Templates"; try deploying it.
Thanks-
Afroz
Similar Messages
-
Prime Infrastructure 2.0 "Wrong CLI Credentials" error with known good credentials
In the device work center sometimes devices show up with "wrong CLI credentials". Even when I change to known good SSH credentials and click the update & sync button the error does not go away.
Has anyone else had this issue? Does anyone know a workaround?
It seems absurd that you would not be able to edit the SSH credentials of devices.
OK, tried everything that was said here; nothing worked... I do have banners, but no # sign; removed them anyway. Then, thinking the banners might be causing issues with what PI expects (I do have my prompts changed to mask the platform), I defaulted back to regular prompts... WORKED!!!!
So here is what works for me: no banners, no custom prompts, AND the device added through the 'classic theme'.
I presume the expectation is that the device begins with a minimal config, with the rest pushed through config template deployment. But have the developers thought of existing devices? Is it related to the IOS version of the target device, or simply a bug?
BTW, PI v2.1
Edit --- to clarify: for some models (namely the UC520), I removed the banner and custom prompts and could add it comfortably through the Lifecycle interface.
Others (3550) could be added easily with banners and custom prompts. Rather inconsistent, though at least I have working recipes.
Thanks for the help all :) -
Prime shows wrong cli credentials but they are good
Hey!
Prime discovered both Nexus 7k Core Router. All fine...
On the first one, Prime retrieves all information with the CLI credentials entered in the Device Work Center.
On the second Nexus it always shows "wrong cli credentials", but they are 100% correct.
Logging into the second Nexus via PuTTY with the same credentials also works fine.
I can't convince Prime to collect the information from the second Nexus.
Even SSH from the Prime CLI to this device is OK...
Any ideas?
OK, tried everything that was said here; nothing worked until I removed the banners and defaulted back to regular prompts, exactly as in the thread above: no banners, no custom prompts, and the device added through the 'classic theme'.
BTW, PI v2.1.
Thanks for the help all :) -
PI2.1:invalid cli credentials for password only login devices
Hi,
We've run into trouble with an LMS-to-PI migration, possibly because the target devices are configured for Telnet
with a password-only login instead of the usual username/password login dialog used in this network.
LMS seems to be able to handle a password-only CLI login for all of the devices, but Prime Infrastructure claims "invalid cli credentials" for most devices; it seems to work for only a smaller subset of them.
What can we do so that PI can handle CLI login with a password-only dialog for all devices? The customer doesn't want to reconfigure the network devices for username/password login.
Steffen
Hi Afroy,
Thanks for your reply. It's not a question of wrong credentials. I tested everything from the ade command line. I used a converted CSV export from LMS (where all devices were working). The problem is PI-only related:
All devices in question are:
- SNMP reachable, because there is a green check mark in the Device Work Center, but name and type are empty despite the SNMP-discovered info; the OID and name are written into the discovery "reachable" log if I try to discover these devices
- unmanaged, because Prime claims "wrong cli credential"
- showing the following in ifm_inventory.log for all of these devices:
postCollection() ... Cache status: false.
Error logging - Device <ip> Updated Successfully
Deleting and re-adding these devices brings no improvement.
Steffen -
No Data Found Exception in bulk updates
I am trying to catch a NO_DATA_FOUND exception in a bulk update when the FORALL loop does not find a record to update.
OPEN casualty;
LOOP
FETCH casulaty
BULK COLLECT INTO v_cas,v_adj,v_nbr
LIMIT 10000;
FORALL i IN 1..v_cas.count
UPDATE tpl_casualty
set casualty_amt = (select amt from tpl_adjustment where cas_adj = v_adj(i))
where cas_nbr = v_nbr(i);
EXCEPTION WHEN NO_DATA_FOUND THEN dbms_output.put_line('exception')
I get this error at the line where i have exception:
PLS-00103: Encountered the symbol "EXCEPTION" when expecting one of the following:
begin case declare end exit for goto if loop mod null pragma
raise return select update while with <an identifier>
<a double-quoted delimited-identifier> <a bind variable> <<
close current delete fetch lock insert open rollback
savepoint set sql execute commit forall merge pipe
Can someone pls direct me on how to get around this?
If I do not handle this exception, the script fails when it attempts to update a record that does not exist and the error says : no data found exception.
Thanks for your help.
Edited by: user8848256 on Nov 13, 2009 6:15 PM
NO_DATA_FOUND isn't an exception raised when an UPDATE cannot find any records to process.
SQL%ROWCOUNT can be used to determine the number of rows affected by an update statement, but if 0 rows are updated then no exception will be raised (it's just not how things work).
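The zero-rows-updated behaviour is easy to demonstrate outside Oracle as well. Here is a minimal sketch using Python's built-in sqlite3; the table and column names are invented for the demo, loosely echoing the ones above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tpl_casualty (cas_nbr INTEGER, casualty_amt NUMERIC)")
cur.execute("INSERT INTO tpl_casualty VALUES (1, 100)")

# UPDATE that matches no rows: no exception is raised, rowcount is simply 0.
cur.execute("UPDATE tpl_casualty SET casualty_amt = 999 WHERE cas_nbr = 42")
print(cur.rowcount)  # 0 -- the database treats "nothing to update" as success

# UPDATE that matches one row: rowcount is 1.
cur.execute("UPDATE tpl_casualty SET casualty_amt = 999 WHERE cas_nbr = 1")
print(cur.rowcount)  # 1
```

In PL/SQL the equivalent check is SQL%ROWCOUNT after the statement (or SQL%BULK_ROWCOUNT(i) per iteration inside a FORALL); NO_DATA_FOUND is raised by SELECT ... INTO, not by UPDATE.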
If you post your actual CURSOR (casualty) declaration, it's quite possible we can help you create a single SQL statement to meet your requirement (a single SQL will be faster than your current implementation).
Have you looked in to using the MERGE command? -
How can I do a bulk update?
Hi
In our BC4J application I need to perform a "bulk update", that is, I need to iterate over all the rows of a View and set one attribute on each one. Tracing the SQL session, I note that the BC4J framework is doing a select for update where pk = ... for each row. Is there something similar to the "executeQuery" view method, that does a "select for update"? something like an "executeQueryAndLockAllRows()?"
I'm using PESSIMISTIC locking mode.
Thanks,
Ramiro
Hi,
You can use the batch-update feature on an entity if your entity does not contain any refresh-on-update flags or any large data types (LOBs). See the help on Batch Update in the Tuning panel of the entity wizard/editor.
It will still execute select..for..update but the network roundtrip will occur in a batch rather than one roundtrip for every row leading to a much faster batch-update performance.
-
I've inherited quite a mess, I'll admit -- I've got ~8000 pages each with different Dreamweaver templates, with the entire site in a varying state of disrepair. I need to perform a global change -- I'm thinking the way to go about this is to update the templates (there are ~40 of them, not nested) and let the process run through. However, I've encountered difficulties.
After about ~2300 files loaded into the site cache, dreamweaver crashes -- there is no error, it's an unhandled exception.... it consistently crashes at this point. I'm not sure if this is a specific page causing the problem, or if it's that I'm trying to load 8K files into the site cache.... So anyway, with it crashing consistently trying to build the site cache, I basically press "stop" whenever it tries, and that seems to abort the building and the 'update pages' screen comes up and tries to update the files.
My next problem is that there are countless errors in each of these pages and templates -- ranging from the 'template not found' when an old or outdated file is referencing a template that has been deleted -- to various mismatched head or body tags. Of course, and this is probably the most annoying thing I've ever encountered, this bulk process that should run over 1000s of files without interaction seems to feel the need to give me a modal alert for every single error. The process stops until I press 'OK'
I'm talking update 5-10 files, error... hit 'return', another 5-10 files are processed, another alert, hit 'return' -- rinse and repeat. Oh, and I made the mistake one time of hitting 'return' one too many times -- oh yes, this will STOP the current update because default focus is on the 'Stop' button, for whatever reason. and if I want to get the rest of the files, I need to run it again -- from the start.
Is there a way to silence these errors? They're already showing up in the log; I wouldn't mind going through it once the entire site has been updated to clean things up... but I'm updating quite literally thousands of pages here, and I would wager that a third of them have some form of error. Do I really need to press "OK" two thousand times to do a bulk update with this program?
Any tips from the pros?
This one might help:
Allow configuration of Automatic Updates in Windows 8 and Windows Server 2012
Regards, Dave Patrick ....
Microsoft Certified Professional
Microsoft MVP [Windows]
Disclaimer: This posting is provided "AS IS" with no warranties or guarantees , and confers no rights. -
Bulk update height of Hyperlinks in a PDF document
When we export our InDesign document to PDF, hyperlinks are automatically generated for each item in the Table of Contents (TOC). This is great, but the problem is that the hyperlink area of each line overlaps with the hyperlink area of the next line. This can create confusion in Adobe Reader and Apple Preview when a user clicks near the top or bottom of a TOC heading, thinking it will take them to that heading in the document, but instead is taken to the preceding or following heading.
We currently have to open the PDF in Adobe Acrobat and manually reduce the height of every hyperlink in the document, particularly in the TOC (using the Link tool). Is there any way in InDesign to change the height of the hyperlinks that are exported? If not, is there a way to bulk update the height of all the hyperlinks using Acrobat rather than doing each one individually/manually (maybe some sort of script)?
Many thanks :) Lee
Yes, you can create a script in Acrobat that loops through the links and adjusts each link's "rect" property to suit your needs. You'll want to consult the Acrobat JavaScript Reference, and look at the doc.getLinks method in particular: http://livedocs.adobe.com/acrobat_sdk/9.1/Acrobat9_1_HTMLHelp/JS_API_AcroJS.88.479.html
along with the Link object properties: http://livedocs.adobe.com/acrobat_sdk/9.1/Acrobat9_1_HTMLHelp/JS_API_AcroJS.88.802.html
Post again if you get stuck.
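As a sketch of the geometry involved: a PDF link rectangle is given as [llx, lly, urx, ury] in points, and shrinking its height means pulling the top and bottom edges toward each other. The Python below is illustrative only (the function name and trim amount are invented, and this is not an Acrobat API); it shows the adjustment you would apply to each rect returned by doc.getLinks before assigning it back to the link's rect property:

```python
def shrink_link_height(rect, trim_pts=2.0):
    """Shrink a link rectangle [llx, lly, urx, ury] vertically by
    trim_pts points on both the top and bottom edges, so adjacent
    TOC links no longer overlap. Never shrinks below 1pt of height."""
    llx, lly, urx, ury = rect
    height = ury - lly
    # Cap the trim so the rectangle keeps at least 1pt of height.
    trim = min(trim_pts, max(0.0, (height - 1.0) / 2.0))
    return [llx, lly + trim, urx, ury - trim]

# Two stacked TOC links that overlap by 2pt before trimming:
link_a = [72.0, 700.0, 300.0, 714.0]
link_b = [72.0, 712.0, 300.0, 726.0]
print(shrink_link_height(link_a))  # [72.0, 702.0, 300.0, 712.0]
print(shrink_link_height(link_b))  # [72.0, 714.0, 300.0, 724.0]
```

In an actual Acrobat batch script you would loop over the document's pages, fetch the links with doc.getLinks, and assign the adjusted array back to each link's rect, as described in the references above.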
I don't know what control you have over this on the InDesign side of things. You may want to ask in one of the InDesign forums if you haven't already. -
Regd bulk update of values in the table..
HI ,
I have a search page, built using auto-customization. This page will be used to query data from a table, and we also need to update a couple of fields in the results table and save them.
There is a results region. I have included the multi-select option of the table, which gives me a Select column as the first column. I have also included a table action and an Update button with it.
Next to the table action's Update button, I need a field where I can enter a value; it should then update the updatable fields of the rows in the table in bulk, all with the same value from that field.
Somewhat like a batch update of the table with the same values.
Could you please tell me how to do this?
Regards,
Preeti
Hi,
When the Update button is clicked:
if (pageContext.getParameter("Update") != null)
{
    // Grab the value of the field next to the Update button
    String value = pageContext.getParameter("<id of text input>");
    // Then loop through the rows of the results view object
    Row[] rows = vo.getAllRowsInRange();
    for (int i = 0; i < rows.length; i++)
    {
        // Set the value of the attribute you want on each row
        rows[i].setAttribute("<Attribute name>", value);
    }
}
// If this attribute is bound to a text input in the table, the change will
// automatically be reflected for all rows of the table (bulk update).
Thanks,
Gaurav -
Problem in bulk update on partitioned table
Hi,
As per my earlier discussions on my huge table t_utr with 220 million rows,
I'm running a bulk update statement on the table which may update anywhere from 10 to 10 million rows in a single update statement.
The problem is that when the statement has to update a larger number of rows, the update takes more time.
Here I want to know: when an update statement has to update more rows, will it impact performance?
Regards
Deepak
> I'm running a bulk update statement on the table
which may update 10 to 10 million rows in a single
update statement.
Bulk updates do not make SQL statements execute any faster.
> The problem is that when the statement has to update
more number of rows, the update statement take more
time.
It is not a problem, but a fact.
> Here I want to know, when a update statement has to
update more rows, will it impact the performance?
You have a car capable of traveling 120km/h. You drive from point A to point B. These are 10 km apart. It takes 5 minutes.
Obviously when you travel from A to Z that are a 1000 km apart, it is going to take a lot longer than just 5 minutes.
Will updating more rows impact performance? No. Because you cannot compare the time it takes to travel from point A to B with the time it takes to travel from point A to Z. It does not make sense wanting to compare the two. Or thinking that a 1000km journey will be as fast to travel than a 10km journey.
Updating 10 rows cannot be compared to updating 10 million rows. Expecting a 10 million row update to be equivalent in "performance" to a 10 row update is ludicrous.
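The A-to-Z analogy can be made concrete with a toy comparison. This hedged sketch uses Python's built-in sqlite3 (schema invented for the demo) to show the row-by-row style next to a single set-based statement; both reach the same end state, and replacing the per-row loop with one statement is the usual starting point when a multi-million-row update must be made faster:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t_utr (id INTEGER PRIMARY KEY, val INTEGER)")
cur.executemany("INSERT INTO t_utr VALUES (?, ?)",
                [(i, 0) for i in range(1000)])

# Style 1: row-by-row -- one statement per row, slow at scale.
for i in range(1000):
    cur.execute("UPDATE t_utr SET val = id * 2 WHERE id = ?", (i,))

# Style 2: one set-based statement touching every row at once.
cur.execute("UPDATE t_utr SET val = id * 2")

cur.execute("SELECT COUNT(*) FROM t_utr WHERE val = id * 2")
print(cur.fetchone()[0])  # 1000 -- both styles end in the same state
```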
The correct question to ask is how to optimise a 10 million row update. The optimisation methods for a large update is obviously very different than those of a small update. E.g. 5 I/Os per row updated is insignificant when updating 10 rows. But is very significant when updating 10 million rows. -
Hi All,
I have a table which has around 50 million rows. I want to update a particular column for all the rows in the table based on some join conditions with other tables.
The conventional update method is taking too much time, no matter if I use an indexed-column-based update, etc. I came to know that BULK updates may be faster.
Can anyone please help me with this? A bulk update code example would be helpful; an example document would also be great.
The conventional syntax can sometimes force a nested-loop/filter plan:
UPDATE table1 t1
SET col1 =
  ( SELECT col2 FROM table2 t2
    WHERE t2.x = t1.x );
in which case updating a view may give you more flexibility and a more efficient join:
UPDATE
( SELECT t1.col1, t2.col2
FROM table1 t1
JOIN table2 t2 ON t2.x = t1.x
WHERE ... )
SET col1 = col2
Have you also made sure you are only updating the rows you need? i.e. those that are different from the desired value (bearing in mind NULLs etc.).
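That last point, restricting the update to rows whose value actually differs, can be sketched with Python's built-in sqlite3 (table names are illustrative). Note how the WHERE clause keeps already-correct rows out of the update, which on Oracle also avoids needless redo/undo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (x INTEGER PRIMARY KEY, col1 INTEGER)")
cur.execute("CREATE TABLE table2 (x INTEGER PRIMARY KEY, col2 INTEGER)")
cur.executemany("INSERT INTO table1 VALUES (?, ?)", [(1, 10), (2, 99), (3, 30)])
cur.executemany("INSERT INTO table2 VALUES (?, ?)", [(1, 10), (2, 20), (3, 30)])

# Correlated update, restricted to rows whose value actually differs.
# (IS NOT is sqlite's null-safe inequality; Oracle would need an explicit
# predicate handling NULLs, e.g. with NVL/DECODE.)
cur.execute("""
    UPDATE table1
    SET col1 = (SELECT col2 FROM table2 WHERE table2.x = table1.x)
    WHERE col1 IS NOT (SELECT col2 FROM table2 WHERE table2.x = table1.x)
""")
print(cur.rowcount)  # 1 -- only row x=2 needed the update
```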
Edited by: William Robertson on Sep 21, 2010 7:26 PM -
Hello Friends,
Can someone suggest a possible bulk update query that takes less time?
Table - MyTable
id - PK.
orderid - Order Id.
Subid - Sub Id for an Order.
lineitem - LineItemId.
ProducId - Product Id.
Now I want to update the Subid in MyTable for every order, on the basis of LineItemId.
For example:
For a single order in MyTable, there can be multiple Subids.
UPDATE MyTable SET subid = 1 WHERE orderid = 123 AND lineitem = 1;
UPDATE MyTable SET subid = 1 WHERE orderid = 123 AND lineitem = 2;
UPDATE MyTable SET subid = 5 WHERE orderid = 123 AND lineitem = 2000;
I worked out three scenarios as follows,
Case1:
UPDATE MyTable SET subid = 5 WHERE orderid = 123 AND lineitem = 2000;
Case2:
UPDATE MyTable SET subid = 1 WHERE orderid = 123 AND lineitem in(1,2,3.....1000);
UPDATE MyTable SET subid = 2 WHERE orderid = 123 AND lineitem in(1001,1002,.....1100);
Case3:
UPDATE MyTable SET subid= CASE WHEN lineitem = 1 THEN 1 WHEN lineitem = 2 THEN 2 .....WHEN 1000 THEN 1000 END WHERE orderid = 123;
Please suggest which update takes less time and is suitable for updating a larger number of records, roughly 5000-10000, in a single table.
You are comparing three cases that are not equal to each other:
Case1:
UPDATE MyTable SET subid = 5 WHERE orderid = 123 AND lineitem = 2000;
Here you update the records with orderid = 123 and lineitem = 2000
Case2:
UPDATE MyTable SET subid = 1 WHERE orderid = 123 AND lineitem in(1,2,3.....1000);
UPDATE MyTable SET subid = 2 WHERE orderid = 123 AND lineitem in(1001,1002,.....1100);
These are multiple update statements to update all records with orderid = 123 and lineitem between 1 and 1100.
Case3:
UPDATE MyTable SET subid= CASE WHEN lineitem = 1 THEN 1 WHEN lineitem = 2 THEN 2 .....WHEN 1000 THEN 1000 END WHERE orderid = 123;
And here all records with orderid = 123, regardless of the lineitem are updated.
So my guess is that case 1 will be the fastest, as it updates the fewest records, followed by case 2 and then case 3. But it is a really odd comparison.
I think you'd better first decide which records need to be updated and how. Then it is best to use one update statement to do the job.
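Driving one UPDATE from a mapping, instead of hard-coding thousands of statements, is the usual way to follow that advice. A hedged sketch with Python's built-in sqlite3; the subid_map table (mapping lineitem ranges to subids) is invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE MyTable (id INTEGER PRIMARY KEY, orderid INTEGER,"
            " lineitem INTEGER, subid INTEGER)")
cur.executemany("INSERT INTO MyTable VALUES (?, 123, ?, NULL)",
                [(i, i) for i in range(1, 11)])

# Hypothetical mapping of lineitem ranges to subids, held in its own table.
cur.execute("CREATE TABLE subid_map (lo INTEGER, hi INTEGER, subid INTEGER)")
cur.executemany("INSERT INTO subid_map VALUES (?, ?, ?)",
                [(1, 5, 1), (6, 10, 2)])

# One set-based UPDATE replaces the per-lineitem statements.
cur.execute("""
    UPDATE MyTable
    SET subid = (SELECT m.subid FROM subid_map m
                 WHERE MyTable.lineitem BETWEEN m.lo AND m.hi)
    WHERE orderid = 123
""")
cur.execute("SELECT subid, COUNT(*) FROM MyTable GROUP BY subid ORDER BY subid")
print(cur.fetchall())  # [(1, 5), (2, 5)]
```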
Regards,
Rob. -
Bulk Update Connected SharePoint Sites via powershell
Hello
Is there a way to Bulk Update Connected SharePoint Sites via powershell?
Yasser
Sure you can; call the following PSI method from PowerShell, passing in the correct parameter values:
http://msdn.microsoft.com/en-us/library/office/gg206217(v=office.15).aspx
Paul
Paul Mather | Twitter |
http://pwmather.wordpress.com | CPS -
Best practice: bulk update (inverse of REF CURSOR SELECT)??
To move data from the database to the application, there are REF CURSORS. However, there is no easy way to move updates/inserts from a dataset back to the database.
Could someone provide some guidelines or simple examples of how to do bulk updates (and I'm talking multiple columns for multiple rows).
I guess the way to go is array binding. Are there any guidelines on how to handle it in .NET and PL/SQL?
You don't use the DECLARE keyword when defining stored procedures; the IS/AS keyword is what you use instead.
CREATE OR REPLACE PROCEDURE TEST_REF
IS
TYPE REF_EMP IS REF CURSOR RETURN EMPLOYEES%ROWTYPE;
RF_EMP REF_EMP;
V_EMP EMPLOYEES%ROWTYPE;
BEGIN
DBMS_OUTPUT.ENABLE(1000000);
OPEN RF_EMP FOR
SELECT *
FROM EMPLOYEES
WHERE EMPLOYEE_ID > 100;
FETCH RF_EMP INTO V_EMP;
DBMS_OUTPUT.PUT_LINE(V_EMP.FIRST_NAME || ' ' || V_EMP.LAST_NAME);
CLOSE RF_EMP;
EXCEPTION
WHEN OTHERS
THEN DBMS_OUTPUT.PUT_LINE(SQLERRM);
END TEST_REF;
will compile. It seems a bit odd that you are opening a cursor and only fetching the first row from it. I would tend to suspect that you want to loop over every row that is returned.
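On the array-binding question: the idea is to send one statement plus arrays of bind values in a single round trip, rather than one call per row. In ODP.NET this is the OracleCommand.ArrayBindCount feature, and in cx_Oracle/python-oracledb it is cursor.executemany. As a self-contained stand-in for the pattern, here is the same shape with Python's built-in sqlite3 (table and column names invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (employee_id INTEGER PRIMARY KEY,"
            " salary NUMERIC)")
cur.executemany("INSERT INTO employees VALUES (?, ?)",
                [(i, 1000) for i in range(1, 6)])

# Bulk update: one parameterised statement, many rows of bind values.
# The driver iterates the batch instead of the application issuing
# five separate UPDATE calls.
updates = [(1500, 1), (1600, 2), (1700, 3), (1800, 4), (1900, 5)]
cur.executemany("UPDATE employees SET salary = ? WHERE employee_id = ?",
                updates)
print(cur.rowcount)  # 5 -- total rows touched across the whole batch
```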
Justin -
Hi,
We have the code block below to do one of our bulk updates. It's taking way too long to finish. It's 10g, version 10.2.0.4.
Can anyone please suggest alternatives to make this code run faster?
DECLARE
CURSOR s_cur IS
SELECT /*+ PARALLEL(item_dscr_copy_t 4) */ id.item_dscr_id,rdc.dscr_id
FROM rgn_t r
INNER JOIN item_t i
ON i.item_origin_rgn_id = r.rgn_id
INNER JOIN item_dscr_t id
ON i.item_rec_id = id.item_rec_id
INNER JOIN dscr_config_t dc
ON dc.dscr_id = id.dscr_id
AND dc.enty_dscr = 'ITEM'
INNER JOIN rgn_t eur
ON dc.rgn_id = eur.rgn_id
AND eur.rgn_mnm = 'EU'
INNER JOIN dscr_config_t rdc
ON rdc.dscr_name = dc.dscr_name
AND rdc.rgn_id = i.item_origin_rgn_id
AND rdc.enty_dscr = 'ITEM'
WHERE r.rgn_mnm LIKE 'EU%'
AND r.rgn_mnm != 'EU';
TYPE t_item_dscr_id IS TABLE OF item_dscr_t.item_dscr_id%TYPE;
TYPE t_dscr_id IS TABLE OF item_dscr_t.dscr_id%TYPE;
ar_item_dscr_id t_item_dscr_id;
ar_dscr_id t_dscr_id;
BEGIN
OPEN s_cur;
LOOP
FETCH s_cur BULK COLLECT INTO ar_item_dscr_id, ar_dscr_id LIMIT 10000;
EXIT WHEN ar_item_dscr_id.COUNT = 0; -- guard: a final empty fetch would otherwise run FORALL with NULL bounds
FORALL i IN ar_item_dscr_id.FIRST .. ar_item_dscr_id.LAST
UPDATE item_dscr_copy_t
SET dscr_id = ar_dscr_id(i)
WHERE item_dscr_id = ar_item_dscr_id(i);
COMMIT;
EXIT WHEN s_cur%NOTFOUND;
END LOOP;
CLOSE s_cur;
END;
Hi,
Did you try it without the hint? Are there maybe triggers on the table item_dscr_copy_t? How is the table item_dscr_copy_t indexed? And the other tables?
Look at the first poster's questions, and maybe you can give us more information.
Yes, I did, but it didn't help. There are no triggers. Please find the index information for this table below.
I am not able to generate the explain plan for this code block!
I tried to generate it, but it gives an ORA-00905: missing keyword error.
OWNER INDEX_NAME INDEX_TYPE TABLE_OWNER TABLE_NAME TABLE_TYPE UNIQUENESS COMPRESSION PREFIX_LENGTH TABLESPACE_NAME INI_TRANS MAX_TRANS INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE PCT_THRESHOLD INCLUDE_COLUMN FREELISTS FREELIST_GROUPS PCT_FREE LOGGING BLEVEL LEAF_BLOCKS DISTINCT_KEYS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY CLUSTERING_FACTOR STATUS NUM_ROWS SAMPLE_SIZE LAST_ANALYZED DEGREE INSTANCES PARTITIONED TEMPORARY GENERATED SECONDARY BUFFER_POOL USER_STATS DURATION PCT_DIRECT_ACCESS ITYP_OWNER ITYP_NAME PARAMETERS GLOBAL_STATS DOMIDX_STATUS DOMIDX_OPSTATUS FUNCIDX_STATUS JOIN_INDEX IOT_REDUNDANT_PKEY_ELIM DROPPED
OGRDSTEST ITEM_DSCR_COPY_PK_IDX NORMAL OGRDSTEST ITEM_DSCR_COPY_T TABLE UNIQUE DISABLED TS_OGRDSTEST 2 255 65536 1 2147483645 10 YES 3 574360 248371394 1 1 4075325 VALID 248371394 471350 9/22/2010 11:23:33 PM 1 1 NO N N N DEFAULT NO YES NO NO NO
OGRDSTEST ITEM_DSCR_COPY_AK1_IDX NORMAL OGRDSTEST ITEM_DSCR_COPY_T TABLE UNIQUE DISABLED TS_OGRDSTEST 2 255 65536 1 2147483645 10 YES 3 899469 253494137 1 1 229355809 VALID 253494137 313109 9/22/2010 11:23:52 PM 1 1 NO N N N DEFAULT NO YES NO NO NO
OGRDSTEST ITEM_DSCR_COPY_IE2_IDX NORMAL OGRDSTEST ITEM_DSCR_COPY_T TABLE NONUNIQUE DISABLED TS_OGRDSTEST 2 255 65536 1 2147483645 10 YES 3 765890 132524340 1 1 247182885 VALID 275152467 433984 9/22/2010 11:24:23 PM 1 1 NO N N N DEFAULT NO YES NO NO NO
OGRDSTEST ITEM_DSCR_COPY_IE3_IDX NORMAL OGRDSTEST ITEM_DSCR_COPY_T TABLE NONUNIQUE DISABLED TS_OGRDSTEST 2 255 65536 1 2147483645 10 YES 3 1571028 48695047 1 3 175338510 VALID 268330841 201031 9/22/2010 11:25:12 PM 1 1 NO N N N DEFAULT NO YES NO NO NO
OGRDSTEST ITEM_DSCR_COPY_IE4_IDX NORMAL OGRDSTEST ITEM_DSCR_COPY_T TABLE NONUNIQUE DISABLED TS_OGRDSTEST 2 255 65536 1 2147483645 10 YES 3 656319 16631783 1 2 44415572 VALID 277273348 514143 9/22/2010 11:25:46 PM 1 1 NO N N N DEFAULT NO YES NO NO NO
OGRDSTEST CTXT_ITEM_DSCR_CP_TXT_IDX DOMAIN OGRDSTEST ITEM_DSCR_COPY_T TABLE NONUNIQUE DISABLED 0 0 0 YES VALID 9/22/2010 11:25:47 PM 1 1 NO N N N NO CTXSYS CONTEXT STOPLIST CTXSYS.EMPTY_STOPLIST NO VALID VALID NO NO NO