Storage of items
Where are the slideshows, cards, calendars, and books stored?
I found the photos in "~/Pictures/iPhoto Library" but can't seem to find anything else.
I vaguely remember that you can save a PDF copy of a book and print that, so I would like to know where it is and how to do it.
Is it also possible to print the cards and calendars?
I created a slideshow and exported that. At least I was able to choose where to save that.
Judy:
If you want a PDF copy of a book, card, or calendar, just start the print process and select Save as PDF under the PDF menu in the print window.
If you have trouble printing a card in register (front to back, so the sides line up properly), read Tutorial #8; that applies if you have a third-party editor that can use layers, like Photoshop Elements for Mac.
TIP: For insurance against the iPhoto database corruption that many users have experienced, I recommend making a backup copy of the Library6.iPhoto database file and keeping it current. If problems crop up where iPhoto suddenly can't see any photos, or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean backing up after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That ensures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup, and it's good insurance.
I've written an Automator workflow application (requires Tiger), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. You can download it at Toad's Cellar. Be sure to read the Read Me pdf file.
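The backup tip above amounts to a simple file copy kept current. Here is a minimal Python sketch of the same step, assuming the default iPhoto library layout; the paths and function name are illustrative, not taken from the Automator workflow:

```python
import shutil
from pathlib import Path

def backup_iphoto_db(home=Path.home()):
    """Copy the iPhoto database file to the Pictures folder,
    replacing any previous backup (paths assume the default layout)."""
    src = home / "Pictures" / "iPhoto Library" / "Library6.iPhoto"
    dst = home / "Pictures" / "Library6.iPhoto.backup"
    shutil.copy2(src, dst)  # overwrites an existing backup, preserves timestamps
    return dst
```

Running this after each import, or after serious work on books and slideshows, keeps the backup current as described above.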
Similar Messages
-
Problem with Duplicate Movie Clip which is tracked in an Array
I have a problem with my inventory code using Shared Objects.
What I did was to track the collected items in an array. Each time the user collects an item, the original iconMC is duplicated and loads a picture of the item (function loadImage). However, my problem is that this retains the original array, so even after I add new items to the inventory, the newest item doesn't show: original array length = 2, new array length = 3, and the last item doesn't show. Only if I reload my Flash movie does it show. Now if I pick up an item and shrink the inventory, it still retains the original length and doubles (or triples, depending on how many items I removed) the last item.
var so:SharedObject = SharedObject.getLocal("lakbayUser");
var i:Number = so.data.currentUserIndex;
var ctr = 0;
var iconCtr;
var iconArray:Array; // temporary storage of items placed inside the icon holder
function selectIcon(num) {
    eval("iconMC" + num)._alpha = 0;
    // [i] restored: the forum software swallowed the literal "[i]" as italics markup
    so.data.users[i][15][0] = true;
    so.data.users[i][15][1] = iconArray;
}
// loads the next icon
function setNextIconMC(itemCtr, iconCtr) {
    if (iconCtr > 0) {
        newName = "iconMC" + iconCtr;
        newPos = 130 * iconCtr;
        // does not duplicate if unloadMovie has been called
        _root.itemaHolderMC.iconHolderMC.iconMC0.duplicateMovieClip(newName, iconCtr + 1);
        this[newName]._x = newPos;
        this["iconMC" + iconCtr].enabled = true;
        this["iconMC" + iconCtr]._alpha = 100;
        loadMovie("gamit/icon" + itemCtr + ".jpg", _root.itemaHolderMC.iconHolderMC[newName]);
        trace(_root.itemaHolderMC.iconHolderMC[newName]);
    } else {
        iconMC0.enabled = true;
        iconMC0._alpha = 100;
        loadMovie("gamit/icon" + itemCtr + ".jpg", _root.itemaHolderMC.iconHolderMC.iconMC0);
    }
}
// loads all the icons into the container
function loadImage() {
    iconCtr = 0;
    iconArray = new Array();
    for (itemCtr = 0; itemCtr < so.data.users[i][14].length; itemCtr++) {
        if (so.data.users[i][14][itemCtr] == true) {
            iconArray[iconCtr] = itemCtr;
            setNextIconMC(itemCtr, iconCtr);
            iconCtr++;
        }
    }
    if (iconCtr == 0) {
        iconMC0._visible = false;
    } else {
        iconMC0._visible = true;
    }
}
loadImage();
so.flush();
stop();
hey there yenniie - each time you update the array you must also call 'flush' to update the SharedObject. Also, please use the 'attach code' button to post code; you can see what happened in your post with the ' i ' value. Assuming I'm interpreting the missing characters correctly, the braces and [i] indexes are restored in the code above.
-
What is the best way to backup four Lion workstations
I have several Mac OS Lion workstations and have used Retrospect in the past to back up and (with poor success) restore missing files and complete drives. I had a recent failure of this concept/system and would like some advice about what could work better.
My (client) Mac Systems all on Lion 10.7.1 unless noted otherwise:
MacPro (1,1) with two internal 500GB, 250GB HD
MacBook (2,1) 500GB HD
MacBook Pro (6,2) internal 500GB HD
MacMini (3,1) <- runs OS X10.6.8 at this time; internal 180GB, 1.0TB IEEE FW800 attached HD of music, movies and documents set up to serve to Apple TV.
Currently available storage for backup:
JBOD disks: 2x1.5TB, 1.0T
GigE network wired connections
Apple Airport Extreme wireless network 802.11b/g/n
Currently running backup applications:
Time Machine using the JBOD disks to backup entire client machine
Parallels virtual media cloning (CCC)
Applications cloning (CCC) on each system
Purpose/needs of a unified network backup:
Maintain an incremental backup, using minimal storage, to quickly retrieve missing files for a minimum of 30 days prior to a loss by error or failure of the internally connected storage system.
Maintain a backup snapshot of each entire drive in the event of disastrous failure of the entire system, or for cloning a complete system onto a new/replacement machine, with or without an OS installed.
Maintain (possibly) separate storage of a virtual OS (Parallels 6), as the incremental backup is by design limited to a clone of the virtual machine.
Maintain separate storage of items like applications so that space on the incremental backup can be conserved.
When I was using 10.4.1, which was years ago, I used SuperDuper.
Allan -
15-inch MacBook Pro w/ Retina Custom Build Delay
On 11/25 I ordered a custom-built 15-inch MacBook Pro with Retina display, 2.6GHz, with 1TB PCIe-based flash storage.
The item still has not shipped; any idea as to the hold-up? Is it the flash storage?
I'd be interested in hearing from others about their backorder situation.
Thanks
Thanks nbar, I wish I could, but it was ordered as a lease through my company's supplier SHI and their 3rd-party Apple company, twice removed from me. SHI says they have no way of giving me the Apple order number, which I find hard to believe. All the other Apple accessories have arrived.
-
LSMW Upload SalesOrder Creation:problem in assigning Partner type SH and SP
Hi
I'm trying to upload sales order creation data using LSMW -BAPI Method
Business Object : BUS2032
Method : CREATEFROMDAT2
Message type : SALESORDER_CREATEFROMDAT2
Basic Type : SALESORDER_CREATEFROMDAT202
and I'm passing the following header data
Sales Order Type, Sales organisation, Distribution Channel, Division, Sold To Party, Ship To party,
Purchase order number, PO Date, Requested delivery date, Order Reason, Payment terms, Incoterms part1, Incoterms part2, Document Currency.
and the following item data
MATERIAL NUMBER
Order quantity
Storage Location
Item Category
Item Usage
Reason for Rejection
Plant
Net Weight
Gross Weight
Condition Type
Amount
Internal Order Number
I'm assigning the header data to structure <b>E1BPSDHD1</b>
and Item data to <b>E1BPSDITM,E1BPSDITM1</b>
and Partner data to structure <b>E1BPPARNR</b>.
When I am assigning partner data to the structure E1BPPARNR, I want to assign both <b>sold-to-party</b> and <b>ship-to-party</b> (because I have two source fields of this type), but there is only one target field related to the partner data. Here I am assigning <b>partner type SP</b> and <b>partner number as sold-to-party</b>, which leaves the ship-to-party field unassigned, and I am unable to find a relevant target field for it. Please help me: how can I assign these two flat-file fields <b>(sold-to-party, ship-to-party)</b>?
Looking for further information: if there are many sold-to-parties and many ship-to-parties, how can I handle that situation, i.e. maintaining a many-to-many relationship using the LSMW tool?
Thanks in advance
regards
Rajasekhar
Here is what you have to do.
In field mapping, double click on the field PARTN_NUMB (or any field of that structure) in change mode. This opens up the code editor. There just enter the following code. I am assuming you are doing only these fields. But if you are mapping more fields of this structure, you have to map them here.
E1BPPARNR-PARTN_NUMB = ORDERHEADER-KUNAG.
E1BPPARNR-PARTN_ROLE = 'AG'.
E1BPPARNR-ITM_NUMBER = '000000'.
*-- add more field mappings here, if needed
TRANSFER_RECORD.
*-- Now pass the Ship-to record
E1BPPARNR-PARTN_NUMB = ORDERHEADER-KUNWE.
E1BPPARNR-PARTN_ROLE = 'WE'.
E1BPPARNR-ITM_NUMBER = '000000'.
*-- Add more partners if needed by copying the above code.
Remember, you need to do TRANSFER_RECORD only as many times as you have partners. There will already be one 'TRANSFER_RECORD' generated at the end of this structure, so keep that in mind.
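As a language-neutral illustration of the populate-then-transfer pattern above: each flat order row expands into one partner record per role. This is a Python sketch, not LSMW code; the record keys mirror E1BPPARNR, while the row keys (`sold_to`, `ship_to`) are assumptions based on the post:

```python
def partner_records(row):
    """Expand one flat order row into one partner record per role,
    mirroring the AG (sold-to) / WE (ship-to) mapping above."""
    records = []
    for role, field in (("AG", "sold_to"), ("WE", "ship_to")):
        records.append({
            "PARTN_ROLE": role,
            "PARTN_NUMB": row[field],
            "ITM_NUMBER": "000000",  # header-level partner, no item number
        })
    return records
```

Each emitted record corresponds to one TRANSFER_RECORD call in the ABAP mapping.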
Srinivas -
Hi,
I want to update profit center while doing goods receipt in MIGO transaction.
I tried using method LINE_MODIFY from the BADI MB_MIGO_BADI, but the profit center field is not getting updated because it is not an input-enabled field.
Also, I checked BADI, MB_MIGO_ITEM_BADI. It updates storage location & item text.
Can anyone suggest me any user exit or BADI which will be useful for my requirement?
Thanks & Regards,
Supriya
Hi,
I am stuck with the problem of changing the cost center in MIGO, very similar to the above problem.
Is there any solution for this.
Thanks in advance,
Vishnu Priya -
Hi All,
I have a mapping scenario like below,
Source Data
Storage Location1
ItemA
ItemB
ItemC
Storage Location2
ItemA
ItemD
ItemE
Storage Location3
ItemD
ItemE
Target Data
ItemA
StorageLocation1
StorageLocation2
ItemB
StorageLocation1
ItemC
StorageLocation1
ItemD
StorageLocation2
StorageLocation3
ItemE
StorageLocation2
StorageLocation3
I have to convert the source data into the target data given. I understand it can be done with ABAP mapping. There are actually two loops like this in my mapping scenario, so I don't know how to do this mapping. Does anyone have this kind of mapping scenario? Please suggest.
Pls. let me know if this is not clear.
Thanks
Giridhar Kommisetty
Hi,
Sorry for the confusion.
Let me explain again.
The source structure is defined with StorageLocation as segment1 and then Item as a subsegment under that.
In the target structure I have to make the Item segment the header and StorageLocation a subsegment under it.
Just the reverse: main segment to subsegment and subsegment to main segment.
Hope I made sense.
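The segment inversion described here is a many-to-many index flip. Outside ABAP mapping, the idea can be sketched in a few lines of Python, using the example data from the post:

```python
def invert(source):
    """Flip a location -> items mapping into an item -> locations mapping,
    preserving the order in which locations appear in the source."""
    target = {}
    for location, items in source.items():
        for item in items:
            target.setdefault(item, []).append(location)
    return target

# Source data from the post: each storage location lists its items.
source = {
    "StorageLocation1": ["ItemA", "ItemB", "ItemC"],
    "StorageLocation2": ["ItemA", "ItemD", "ItemE"],
    "StorageLocation3": ["ItemD", "ItemE"],
}
```

In the real scenario the outer loop runs over the StorageLocation segments and the inner loop over their Item subsegments; the inversion collects locations under each item, which is exactly the target structure shown above.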
thanks,
Giridhar Kommisetty -
Need clarification on AT LINE SELECTION & AT USER COMMAND
Hi all,
Can we use AT LINE-SELECTION and AT USER-COMMAND events in the same report? If yes, what are the precautions that we have to take?
Thanks in advance
venkat
Hi Venkat,
I had written this code while I was learning Menu Painter. It will help you.
*& Report YTEST_MENUPAINTER *
REPORT ztest.
*Consider a scenario when the user asks for Material Details(Table : MARA )
*displayed in one List and based on the Material selected he wants the corresponding
*Storage Location Data for that Material (Table : MARD ).
TABLES : mara.
TYPES : BEGIN OF tp_mara,
matnr TYPE mara-matnr,
mtart TYPE mara-mtart,
mbrsh TYPE mara-mbrsh,
matkl TYPE mara-matkl,
END OF tp_mara.
TYPES : BEGIN OF tp_marc,
matnr TYPE marc-matnr,
werks TYPE marc-werks,
pstat TYPE marc-pstat,
ekgrp TYPE marc-ekgrp,
dispr TYPE marc-dispr,
END OF tp_marc.
TYPES : BEGIN OF tp_mard,
matnr TYPE mard-matnr,
werks TYPE mard-werks,
lgort TYPE mard-lgort,
lfgja TYPE mard-lfgja,
labst TYPE mard-labst,
umlme TYPE mard-umlme,
END OF tp_mard.
DATA : t_mara TYPE STANDARD TABLE OF tp_mara,
t_marc TYPE STANDARD TABLE OF tp_marc,
t_mard TYPE STANDARD TABLE OF tp_mard,
wa_mara TYPE tp_mara,
wa_marc TYPE tp_marc,
wa_mard TYPE tp_mard.
DATA : w_werks TYPE werks .
DATA : itab TYPE TABLE OF sy-ucomm.
START-OF-SELECTION.
*Collecting the material details form Table MARA
SELECT matnr
mtart
mbrsh
matkl
FROM mara
INTO TABLE t_mara
UP TO 200 ROWS.
END-OF-SELECTION.
SET PF-STATUS 'DETAIL'.
*Now I am Displaying the Material Details in the Primary List
CLEAR wa_mara.
LOOP AT t_mara INTO wa_mara.
IF sy-tabix EQ 1.
FORMAT INTENSIFIED ON.
FORMAT COLOR COL_KEY.
WRITE : /5(16) 'Material Number'.
FORMAT COLOR COL_NORMAL.
WRITE : 24(15) 'Material Type',
40(18) 'Industry Sector',
58(18) 'Material Group' .
ENDIF.
FORMAT INTENSIFIED OFF.
FORMAT COLOR COL_KEY.
WRITE : /5(16) wa_mara-matnr.
FORMAT COLOR COL_NORMAL.
WRITE : 24(15) wa_mara-mtart,
40(18) wa_mara-mbrsh,
58(18) wa_mara-matkl.
*You can assume some sort of buffer is created in the memory and the values of
* wa_mara-matnr are put into it when you use the HIDE command
HIDE wa_mara-matnr.
ENDLOOP.
*Now when the user double-clicks a line, the AT LINE-SELECTION event is
*triggered; the contents of the selected line and the values buffered by the
*HIDE command interact, and the variable referenced by HIDE
*(wa_mara-matnr in our case) is filled with the value for that line
AT LINE-SELECTION.
IF sy-lsind = 1.
FORMAT INTENSIFIED ON.
WRITE: 'Plant Data for Material ' COLOR COL_NORMAL,
35 wa_mara-matnr COLOR COL_TOTAL.
REFRESH t_marc.
* Now I have the value of the Material in my hidden variable wa_mara-matnr
* Based on this I am selecting the Plant Data from MARC
SELECT matnr
werks
pstat
ekgrp
dispr
FROM marc
INTO TABLE t_marc
WHERE matnr = wa_mara-matnr.
CLEAR wa_marc.
FORMAT INTENSIFIED OFF.
FORMAT COLOR COL_NORMAL.
LOOP AT t_marc INTO wa_marc.
IF sy-tabix EQ 1.
FORMAT INTENSIFIED ON.
FORMAT COLOR COL_NORMAL.
WRITE : /24(6) 'Plant',
30(22) 'Maintenance status',
52(20) 'Purchasing Group',
72(27) 'Material: MRP profile'.
ENDIF.
WRITE : /24(6) wa_marc-werks,
30(22) wa_marc-pstat,
52(20) wa_marc-ekgrp,
72(27) wa_marc-dispr.
CLEAR wa_marc.
ENDLOOP.
SKIP 5.
FORMAT INTENSIFIED ON.
WRITE: 'Storage Data for Material ' COLOR COL_NORMAL,
35 wa_mara-matnr COLOR COL_TOTAL.
REFRESH t_mard.
SELECT matnr
werks
lgort
lfgja
labst
umlme
FROM mard
INTO TABLE t_mard
WHERE matnr = wa_mara-matnr.
CLEAR wa_mard.
FORMAT COLOR COL_NORMAL.
* Display the Storage Location Data in the Secondary List
LOOP AT t_mard INTO wa_mard.
IF sy-tabix EQ 1.
FORMAT INTENSIFIED ON.
FORMAT COLOR COL_NORMAL.
WRITE : /24(6) 'Plant',
30(20) 'Storage Location',
50(12) 'Fiscal Year',
62(15) 'Valuated stock',
77(20) 'Stock in transfer'.
ENDIF.
WRITE : /24(6) wa_mard-werks,
30(20) wa_mard-lgort,
50(12) wa_mard-lfgja,
62(15) wa_mard-labst,
77(20) wa_mard-umlme.
ENDLOOP.
ENDIF.
AT USER-COMMAND.
CASE sy-ucomm.
WHEN 'PLANT'.
REFRESH itab. CLEAR itab.
APPEND 'PLANT' TO itab.
APPEND 'STORAGE' TO itab.
SET PF-STATUS 'DETAIL' EXCLUDING itab .
FORMAT INTENSIFIED ON.
WRITE: 'Plant Data for Material ' COLOR COL_NORMAL,
35 wa_mara-matnr COLOR COL_TOTAL.
REFRESH t_marc.
SELECT matnr
werks
pstat
ekgrp
dispr
FROM marc
INTO TABLE t_marc
WHERE matnr = wa_mara-matnr.
CLEAR wa_marc.
FORMAT INTENSIFIED OFF.
FORMAT COLOR COL_NORMAL.
LOOP AT t_marc INTO wa_marc.
IF sy-tabix EQ 1.
FORMAT INTENSIFIED ON.
FORMAT COLOR COL_NORMAL.
WRITE : /24(6) 'Plant',
30(22) 'Maintenance status',
52(20) 'Purchasing Group',
72(27) 'Material: MRP profile'.
ENDIF.
WRITE : /24(6) wa_marc-werks,
30(22) wa_marc-pstat,
52(20) wa_marc-ekgrp,
72(27) wa_marc-dispr.
CLEAR wa_marc.
ENDLOOP.
WHEN 'STORAGE'.
REFRESH itab. CLEAR itab.
APPEND 'PLANT' TO itab.
APPEND 'STORAGE' TO itab.
SET PF-STATUS 'DETAIL' EXCLUDING itab .
FORMAT INTENSIFIED ON.
WRITE: 'Storage Data for Material ' COLOR COL_NORMAL,
35 wa_mara-matnr COLOR COL_TOTAL.
REFRESH t_mard.
SELECT matnr
werks
lgort
lfgja
labst
umlme
FROM mard
INTO TABLE t_mard
WHERE matnr = wa_mara-matnr.
CLEAR wa_mard.
FORMAT COLOR COL_NORMAL.
LOOP AT t_mard INTO wa_mard.
IF sy-tabix EQ 1.
FORMAT INTENSIFIED ON.
FORMAT COLOR COL_NORMAL.
WRITE : /24(6) 'Plant',
30(20) 'Storage Location',
50(12) 'Fiscal Year',
62(15) 'Valuated stock',
77(20) 'Stock in transfer'.
ENDIF.
WRITE : /24(6) wa_mard-werks,
30(20) wa_mard-lgort,
50(12) wa_mard-lfgja,
62(15) wa_mard-labst,
77(20) wa_mard-umlme.
ENDLOOP.
ENDCASE.
My SE41 settings are:
Application toolbar Test for Material Detail Display
Items 1 - 7 STORAGE PLANT
STORAG PLANT
Items 8 - 14
Items 15 - 21
Items 22 - 28
Items 29 - 35
Function keys Test for Material Detail Display
Standard Toolbar
SAVE BACK EXIT CANCEL PRINT FIND FIND NEXT
Recommended function key settings
F2 PICK Choose
F9 <..> Select
Shift-F2 <..> Delete
Shift-F4 <..> Save without check
Shift-F5 <..> Other <object>
Freely assigned function keys
F5 STORAGE STORAGE
F6 PLANT PLANT
F7
F8
Shift-F1
Hope this will help you.
Regards,
Arun Sambargi.
-
When I save a document, it does not show up in Documents for a long time. I can use Finder to find and open the document if I can remember what I named it. Sometimes I remember the subject, but I can't remember the name I gave it, but it usually shows up in Documents in a few weeks. I don't remember having this problem when the computer was new. Is this a bug in a recent OS update? The information I can find on saving documents says that they should show up in Documents, but it doesn't say how long it should take. Do I have any control over how long it takes?
Can you save a document to a folder not in the Documents folder?
Funny thing, I've owned dozens of Macs and never use that folder.
Usually, you can choose a folder or path to have items saved there.
In fact, since I am often the only user of a computer and sometimes have two user accounts, I still do not use these user-account specialty folders. I've often done as I said: created a folder on the hard disk drive with an alias to it on my desktop, or dragged a link/icon into the Dock (next to the Trash is an easy location for it).
It sounds as though there may still be some issues in the computer's system regarding how your created files are handled. You could boot the computer from the Tiger install disc and run Disk Utility from the version found in the Installer's menu bar (do not run the install sequence; the Finder-like Installer menu bar has drop-down menus, like the Finder does, and you can launch Disk Utility and do other jobs from the Installer without installing anything).
If using the booted Tiger installer's Disk Utility, be sure to avoid the extra features this utility offers in this situation; it can do a fair amount of damage should you do more than 'repair disk' and 'repair disk permissions'. It can also reformat, erase, and partition.
The 9GB of space remaining on the hard disk drive may be a marginal asset, since up to 5GB of the drive can be used by the System and apps as virtual memory and temporary swap files. That would leave a dangerously low amount of 'free space' overall.
My computer drives have about 70% free space, except in my iBook,
and that has about 60% free space since it has extra spare installer
.dmg files in a folder there. I try and keep a good backup clone on an
externally enclosed FireWire bootable hard disk drive, of each Mac HD.
(Your Mini is probably an Intel-based Mac; so a backup external enclosure
for a hard disk drive for clones & backup would likely be USB2.0; if you
get one, be sure the vendor knows Macs. Get one that can support clones
and booting OS X from a drive in such an enclosure. If the Mini has ports
for FireWire, then get an external drive enclosure with FW & USB2.0.)
For general periodic maintenance, I have and use OnyX, about once a month or so (if bored, sooner; if forgetful, later). I have OnyX set, in its own preferences, to restart the computer automatically when it finishes running any task that requires or recommends a restart, so it does that. I can leave it after launching the selected choices in Automation (check-boxed items can be chosen), and I usually choose All. (This does not mean it will start and run all by itself; I don't know if it can do that aspect of scheduled maintenance, since launching requires an Admin password; so you'd have to be there to run OnyX, or log in remotely to do it.)
• OnyX - Titanium Software:
http://www.titanium.free.fr/pgs/english/apps.html
This can take some time, depending on the size of the hard disk drive
and other particular items in each machine's configuration. Maybe up
to 45 minutes on a drive if not too full. Could take longer on yours.
At some point, given the age of your computer and the fullness of the
capacity of the hard disk drive, you may have to consider the wear &
age of the moving parts sufficient reason to replace the hard disk drive.
If you have, or are considering, a complete backup of the computer's contents, there are external drive enclosures which can support a full computer clone (free-running clone utilities are downloadable), and you could back up the computer's contents that way. A good bootable clone is about the best assurance you can still access your data if the main drive in the computer should fail. You can make and test a clone before any disk maintenance, a reinstall from scratch, or replacing an old drive.
The time probably has come to consider these ideas and learn more.
An all-new installation on a low-level erased and reformatted hard disk drive, updated to the last Tiger 10.4.11 combo, would be the basis for a cleaner and faster-running system. Then the last security update, Java updates, QuickTime, browsers, and the Flash & Shockwave players could be applied, and only the apps you use most reinstalled and updated. Even without a new internal hard disk drive, this could be a great refresh.
An external drive could be a home for more than just a bootable clone of
the computer's current content, a backup; the external can be partitioned
and more than one clone or different system version could be on there;
or a partition used for storage of items to be used alongside the Mac.
iTunes and iPhoto libraries can be relocated or moved to an external; so
as to free-up the internal drive. Generally, to keep it from being too full.
And I suspect your current hard drive status may be marginal, in that little free or unused space exists; if needed, you could not perform an 'Archive & Install/update' procedure or other helpful major tasks within it.
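The free-space concern raised here is easy to quantify. A small Python sketch of such a check; the 15% warning threshold is an illustrative choice, not a figure from the post:

```python
import shutil

def free_space_report(path="/", warn_fraction=0.15):
    """Report total/free space for a volume and flag when free space
    drops below warn_fraction of capacity (threshold is illustrative)."""
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    return {
        "total_gb": usage.total / 1e9,
        "free_gb": usage.free / 1e9,
        "low": free_fraction < warn_fraction,
    }
```

A `low` result would be a signal that tasks like Archive & Install, or even normal virtual-memory swapping, may not have the room they need.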
Anyway, I do not have a specific answer as to why the files you have saved to the Documents folder are not appearing there. They should appear in your user folder, since all your user activity while booted and logged into a user account goes there, generally speaking.
You should be able to search for a folder you made, by date, and see
if it still exists or if it was overwritten by something else & is missing.
The best cure to most OS X issues is preventative maintenance; but
in some (few) cases, in a newer and unproven system, other issues
may be the cause of some odd problems. But OS X 10.4.11 is very
stable and if kept up, probably the best system version so far; in the
comparison of non-current systems.
Good luck & happy computing! -
Partition X does not exist at PartitionSplittingBackingMap
Hi Guys,
I recently upgraded to Coherence 3.5 and I now seem to regularly get errors similar to the one below when starting the cluster, followed by node death.
Any ideas what the cause might be?
2009-10-22 15:12:17,331 ERROR lccohd1-2 1.7.596 Log4j [Logger@9236976 3.5.1/461p2] - 46.747 <Error> (thread=DistributedCache, member=2):
java.lang.IllegalStateException: Partition 45 does not exist at PartitionSplittingBackingMap{Name=tradeoverview$Backup,Partitions=[63,128,165,166,167,168,169,170,192,193,194,195,196,197,198,199,200,201,202,203,]}
at com.tangosol.net.partition.PartitionSplittingBackingMap.reportMissingPartition(PartitionSplittingBackingMap.java:566)
at com.tangosol.net.partition.PartitionSplittingBackingMap.putAll(PartitionSplittingBackingMap.java:161)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onPutRequest(DistributedCache.CDB:132)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$PutRequest.run(DistributedCache.CDB:1)
at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheKeyRequest.ExtendedKeyRequest.onReceived(ExtendedKeyRequest.CDB:8)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Thread.java:619)
Thanks, Paul
Hi Paul,
pmackin wrote:
1) How many members are running, including the one that won't start.
This is happening randomly in integration, which has 2 machines, each with 6 members (4 storage-enabled, 2 disabled).
pmackin wrote:
2) Are you running the same version of Coherence for all members? If not, what versions are running.
All members are the same version - 3.5.1/461p2
Thanks, Paul
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "dtd/cache-config.dtd">
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>cache-control</cache-name>
<scheme-name>distributed-identifiable-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>event-registration-cache</cache-name>
<scheme-name>replicated-identifiable-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>reference-data-*</cache-name>
<scheme-name>replicated-identifiable-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>timeseries-*</cache-name>
<scheme-name>distributed-timeseries-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>distributed-timeseries-*</cache-name>
<scheme-name>distributed-timeseries-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>replicated-identifiable-*</cache-name>
<scheme-name>replicated-identifiable-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>distributed-identifiable-*</cache-name>
<scheme-name>distributed-identifiable-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>distributed-token-*</cache-name>
<scheme-name>distributed-token-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>token-*</cache-name>
<scheme-name>distributed-token-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>single-start-services</cache-name>
<scheme-name>replicated-identifiable-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>event-registration-cache</cache-name>
<scheme-name>replicated-identifiable-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>order</cache-name>
<scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
<init-params>
<init-param>
<param-name>cachestore-name</param-name>
<param-value>OrderCacheStore</param-value>
</init-param>
</init-params>
</cache-mapping>
<cache-mapping>
<cache-name>execution</cache-name>
<scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
<init-params>
<init-param>
<param-name>cachestore-name</param-name>
<param-value>ExecutionCacheStore</param-value>
</init-param>
</init-params>
</cache-mapping>
<cache-mapping>
<cache-name>tradeoverview</cache-name>
<scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
<init-params>
<init-param>
<param-name>cachestore-name</param-name>
<param-value>TradeOverviewCacheStore</param-value>
</init-param>
</init-params>
</cache-mapping>
<cache-mapping>
<cache-name>tradeoverview-latest</cache-name>
<scheme-name>distributed-identifiable-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>calculators</cache-name>
<scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
<init-params>
<init-param>
<param-name>cachestore-name</param-name>
<param-value>CompositeCalculatorCacheStore</param-value>
</init-param>
</init-params>
</cache-mapping>
<cache-mapping>
<cache-name>instrumentstatistics</cache-name>
<scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
<init-params>
<init-param>
<param-name>cachestore-name</param-name>
<param-value>InstrumentStatisticsCacheStore</param-value>
</init-param>
</init-params>
</cache-mapping>
<cache-mapping>
<cache-name>eclipsesequence</cache-name>
<scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
<init-params>
<init-param>
<param-name>cachestore-name</param-name>
<param-value>EclipseSequenceCacheStore</param-value>
</init-param>
</init-params>
</cache-mapping>
<cache-mapping>
<cache-name>eodrecord</cache-name>
<scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
<init-params>
<init-param>
<param-name>cachestore-name</param-name>
<param-value>EodRecordCacheStore</param-value>
</init-param>
</init-params>
</cache-mapping>
<cache-mapping>
<cache-name>auditrecord</cache-name>
<scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
<init-params>
<init-param>
<param-name>cachestore-name</param-name>
<param-value>AuditRecordCacheStore</param-value>
</init-param>
</init-params>
</cache-mapping>
<cache-mapping>
<cache-name>loggedinusers</cache-name>
<scheme-name>distributed-identifiable-evict-scheme</scheme-name>
<init-params>
<init-param>
<param-name>cache-scheme-name</param-name>
<param-value>eventsource-local-scheme</param-value>
</init-param>
<init-param>
<param-name>flush-delay</param-name>
<param-value>10s</param-value>
</init-param>
<init-param>
<param-name>expiry-delay</param-name>
<param-value>10s</param-value>
</init-param>
<init-param>
<param-name>high-units</param-name>
<param-value>10000</param-value>
</init-param>
</init-params>
</cache-mapping>
<cache-mapping>
<cache-name>datatransfer</cache-name>
<scheme-name>distributed-identifiable-evict-scheme</scheme-name>
<init-params>
<init-param>
<param-name>flush-delay</param-name>
<param-value>10s</param-value>
</init-param>
<init-param>
<param-name>expiry-delay</param-name>
<param-value>10s</param-value>
</init-param>
<init-param>
<param-name>high-units</param-name>
<param-value>10000</param-value>
</init-param>
</init-params>
</cache-mapping>
<cache-mapping>
<cache-name>timeseries-log</cache-name>
<scheme-name>distributed-timeseries-log-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>alerts</cache-name>
<scheme-name>distributed-identifiable-evict-scheme</scheme-name>
<init-params>
<init-param>
<param-name>flush-delay</param-name>
<param-value>10s</param-value>
</init-param>
<init-param>
<param-name>expiry-delay</param-name>
<param-value>24h</param-value>
</init-param>
<init-param>
<param-name>high-units</param-name>
<param-value>600</param-value>
</init-param>
</init-params>
</cache-mapping>
<!-- BEGIN: com.oracle.coherence.patterns.command
The following section needs to be included in your application
Cache Configuration file to make use of the Command Pattern
-->
<cache-mapping>
<cache-name>sequence-generators</cache-name>
<scheme-name>distributed-scheme-for-sequence-generators</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>commands</cache-name>
<scheme-name>distributed-scheme-with-backing-map-listener</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>contexts</cache-name>
<scheme-name>distributed-scheme-with-backing-map-listener</scheme-name>
<init-params>
<init-param>
<param-name>backing-map-listener-class-name</param-name>
<param-value>com.oracle.coherence.patterns.command.internal.ContextBackingMapListener</param-value>
</init-param>
</init-params>
</cache-mapping>
<!-- END: com.oracle.coherence.patterns.command -->
</caching-scheme-mapping>
<!-- ****************************************************************** -->
<caching-schemes>
<distributed-scheme>
<scheme-name>distributed-identifiable-persistent-scheme</scheme-name>
<backing-map-scheme>
<read-write-backing-map-scheme>
<internal-cache-scheme>
<local-scheme>
<scheme-ref>binary-eventsource-local-scheme</scheme-ref>
</local-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>container:com.core.cache.cachestores.{cachestore-name}</class-name>
</class-scheme>
</cachestore-scheme>
<cachestore-timeout>1800000</cachestore-timeout>
<write-delay>1</write-delay>
<write-requeue-threshold>50000</write-requeue-threshold>
</read-write-backing-map-scheme>
</backing-map-scheme>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
</distributed-scheme>
<distributed-scheme>
<scheme-name>distributed-identifiable-persist-evict-scheme</scheme-name>
<backing-map-scheme>
<read-write-backing-map-scheme>
<internal-cache-scheme>
<local-scheme>
<expiry-delay>10s</expiry-delay>
<high-units>10000</high-units>
</local-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>container:com.core.cache.cachestores.{cachestore-name}</class-name>
</class-scheme>
</cachestore-scheme>
<cachestore-timeout>1800000</cachestore-timeout>
<write-delay>1s</write-delay>
<write-requeue-threshold>50000</write-requeue-threshold>
</read-write-backing-map-scheme>
</backing-map-scheme>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
</distributed-scheme>
<distributed-scheme>
<scheme-name>distributed-identifiable-evict-scheme</scheme-name>
<thread-count>5</thread-count>
<backing-map-scheme>
<local-scheme>
<scheme-ref>eventsource-local-scheme</scheme-ref>
<flush-delay>{flush-delay}</flush-delay>
<expiry-delay>{expiry-delay}</expiry-delay>
<high-units>{high-units}</high-units>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
</distributed-scheme>
<!--
********* A distributed store that contains identifiable *******
********* objects. Keys are all Ids and values are AbstractIdentifiable. ******
-->
<distributed-scheme>
<scheme-name>distributed-identifiable-scheme</scheme-name>
<thread-count>5</thread-count>
<backing-map-scheme>
<local-scheme>
<scheme-ref>binary-eventsource-local-scheme</scheme-ref>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
</distributed-scheme>
<distributed-scheme>
<scheme-name>distributed-token-scheme</scheme-name>
<thread-count>1</thread-count>
<backing-map-scheme>
<local-scheme>
<scheme-ref>token-eventsource-local-scheme</scheme-ref>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
</distributed-scheme>
<!--
********* A replicated scheme with unlimited local storage *******
********* all items should extend from AbstractIdentifiable ******
-->
<replicated-scheme>
<scheme-name>replicated-identifiable-scheme</scheme-name>
<backing-map-scheme>
<local-scheme>
<scheme-ref>eventsource-local-scheme</scheme-ref>
</local-scheme>
</backing-map-scheme>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
</replicated-scheme>
<!--
********* A timeseries scheme with limited local storage *******
********* all items should be keyed using a TimeseriesKey ******
-->
<distributed-scheme>
<scheme-name>distributed-timeseries-scheme</scheme-name>
<lease-granularity>member</lease-granularity>
<backing-map-scheme>
<local-scheme>
<scheme-ref>binary-eventsource-local-scheme</scheme-ref>
</local-scheme>
</backing-map-scheme>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
</distributed-scheme>
<distributed-scheme>
<scheme-name>distributed-timeseries-log-scheme</scheme-name>
<lease-granularity>member</lease-granularity>
<backing-map-scheme>
<local-scheme>
<scheme-ref>eventsource-local-scheme</scheme-ref>
<high-units>500</high-units>
<expiry-delay>4h</expiry-delay>
<flush-delay>10s</flush-delay>
</local-scheme>
</backing-map-scheme>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
</distributed-scheme>
<!-- BEGIN: com.oracle.coherence.patterns.command
The following section needs to be included in your application
Cache Configuration file to make use of the Command Pattern
-->
<distributed-scheme>
<scheme-name>distributed-scheme-with-backing-map-listener</scheme-name>
<backing-map-scheme>
<local-scheme>
<listener>
<class-scheme>
<class-name>{backing-map-listener-class-name com.oracle.coherence.common.backingmaplisteners.NullBackingMapListener}</class-name>
<init-params>
<init-param>
<param-type>com.tangosol.net.BackingMapManagerContext</param-type>
<param-value>{manager-context}</param-value>
</init-param>
</init-params>
</class-scheme>
</listener>
</local-scheme>
</backing-map-scheme>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
</distributed-scheme>
<distributed-scheme>
<scheme-name>distributed-scheme-for-sequence-generators</scheme-name>
<backing-map-scheme>
<local-scheme>
</local-scheme>
</backing-map-scheme>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
</distributed-scheme>
<!-- END: com.oracle.coherence.patterns.command -->
<!--
********* A local scheme that pushes events to the eventrouter *******
-->
<local-scheme>
<scheme-name>eventsource-local-scheme</scheme-name>
<listener>
<class-scheme>
<class-name>container:com.core.cache.events.EventSourceBackingMapListener</class-name>
<init-params>
<init-param>
<param-type>com.tangosol.net.BackingMapManagerContext</param-type>
<param-value>{manager-context}</param-value>
</init-param>
</init-params>
</class-scheme>
</listener>
</local-scheme>
<local-scheme>
<scheme-name>binary-eventsource-local-scheme</scheme-name>
<unit-calculator>BINARY</unit-calculator>
<listener>
<class-scheme>
<class-name>container:com.core.cache.events.EventSourceBackingMapListener</class-name>
<init-params>
<init-param>
<param-type>com.tangosol.net.BackingMapManagerContext</param-type>
<param-value>{manager-context}</param-value>
</init-param>
</init-params>
</class-scheme>
</listener>
</local-scheme>
<!--
********* A local scheme that pushes events to the eventrouter, including DISTRIBUTION events. *******
-->
<local-scheme>
<scheme-name>token-eventsource-local-scheme</scheme-name>
<listener>
<class-scheme>
<class-name>container:com.core.cache.events.TokenEventSourceBackingMapListener</class-name>
<init-params>
<init-param>
<param-type>com.tangosol.net.BackingMapManagerContext</param-type>
<param-value>{manager-context}</param-value>
</init-param>
</init-params>
</class-scheme>
</listener>
</local-scheme>
<!--
********* The TCP Extend proxy scheme ****************************
-->
<proxy-scheme>
<service-name>ExtendTcpProxyService</service-name>
<thread-count>5</thread-count>
<acceptor-config>
<tcp-acceptor>
<local-address>
<address>localhost</address>
<port system-property="tangosol.coherence.ems.port">10001</port>
<reusable>true</reusable>
</local-address>
</tcp-acceptor>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
<use-filters>
<filter-name>wrapped-gzip</filter-name>
<filter-name>version-check-filter</filter-name>
</use-filters>
</acceptor-config>
<autostart system-property="tangosol.coherence.ems.enabled">true</autostart>
</proxy-scheme>
</caching-schemes>
</cache-config> -
Hello,
Can anyone help me with LSMW for the MB1B t-code?
Header details: movement type, plant, storage location
Item details: receiving plant, material number, quantity, batch number
Regards,
Chitra

OK, we can use recording.
With one recording we have 2 input structures (Header + Item).
1. How will you make the relationship between source and target?
We don't know, at execution time, how many item records will come for the item part.
2. While recording, how will you do it?
Regards,
Chitra -
Issue: No automatic determination of picking request during delivery creation
Hi!
I have created a new shipping point as a copy of an existing shipping point.
The issue is that for this new shipping point, deliveries are created but not the picking request.
I cannot find any difference in the setup with regard to:
Plant
Storage Location
Item Category
Movement type
Delivery Type
Factory calendar
Picking location assigned.
Can somebody please help me out?
Tnx a lot!
BR
Chris

Dear Siva,
As mentioned yesterday, I changed the language from DE to EN to match the other shipping points' settings in table V_TVST; this did not solve the issue.
Please let me summarize, I am really desperate here:
This is only IM related, not WM.
Picking lists are not printed for any shipping point from this warehouse; this is just a small subsidiary of my customer in Finland.
The issue is not automatic PGI.
VP01SHP has not been configured for any shipping point, yet we do get the picking request there, except for the new shipping point.
In the deliveries of correctly processed shipping points I do not find any picking output type.
The item category in the new shipping point is equal to the item category in the already existing shipping points, so no config needed here.
There is no picking block active.
Picking request creation happens once I enter the picking quantity in the delivery in VL02N. This is the part that we need to have automated.
Can you please try to help me out?
Tnx & regards,
Chris -
Unable to mount flash drive w/ OS 9.2.2 and Powerbook G3
I've upgraded my G3 Pismo up to OS 9.2.2. However, the system won't mount a USB flash drive.
Here's a breakdown of the problem(s):
1) Insert flash drive, but flash drive won't appear on desktop.
2) Check System Profiler, and system freezes.
3) Remove flash drive from USB hub, and system unfreezes. System Profiler now has a listing for "Sandisk Cruzer Mini" in the USB -- even though the flash drive is no longer there.
4) Check extensions. There is no USB Mass Storage Support extension. Download from Apple site, the USB Mass Storage Support 1.3.5 extension.
5) Attempt to install this extension. Won't let me do it. Gives the message "This program cannot be run on your computer. See the documentation for details."
6) Check documentation. I have the requisite system: Powerbook G3 and OS 9.2.2.
7) Attempt to reinstall OS 9.2.2, as perhaps that might fix the problem.
8) Cannot re-install OS 9.2.2. Gives the message: "The application program 9.2.2.smi etc. cannot be opened because an error of type -39 occurred."
9) Clueless at this point.

Hi, BGBchewy -
Re the USB Mass Storage Support item, the ReadMe that accompanies it and comprises the info on the download page states -
"Requirements
Mass Storage Support 1.3.5 requires Mac OS 8.6. You do not need to install this software if you have installed Mac OS 9."
Ditto the later version of that software, USB Adapter Card Support 1.4.1.
This user Tip may provide the assistance you need -
http://discussions.apple.com/thread.jspa?messageID=607556&
<hr>
Re the -39 error and the OS 9.2.2 download update, that error is most commonly caused by Stuffit Expander intruding where it should not. The download is in a .smi (self-mounting image) format, a format generated by Apple's Disk Copy. Stuffit Expander has ShrinkWrap technology built-in, which allows it to mount some kinds of disk images - but not .smi ones. Yet ShrinkWrap tries to - hence the error message.
The solution - double-click Stuffit Expander, and open its Preferences (under the File menu). Select "Disk Images" from the list on the left, then turn off (uncheck) "Mount Disk Images". Quit Stuffit Expander. -
Dear SAP Gurus
I want to know at what level these fields are defined in a delivery (header, item, or both):
Plant
Shipping Point
Route
Shipping condition
storage Location
Storage Condition
Gross weight and net weight
Transportation group
Warehouse
Inco terms
Please provide me better insights on this.

Hi,
Please find the details below,
Plant - Header level
Shipping Point - Header level
Route - Header level
Shipping condition -Header level
storage Location - Item Level
Storage Condition - item level
Gross weight and net weight - Item level
Transportation group - Header level
Warehouse - Item level
Inco terms - Header level
Regards,
Ravi Duggirala -
WP8 c++/Direct3D Launch Parameters (ProtocolActivatedEventArgs problem)
Hi,
I need more help with WP8 app development. I need to associate a native C++/Direct3D app with a file extension. It should not be a problem - all I need is to read the launch parameters to get the file path. The problem is that ProtocolActivatedEventArgs is not working; it does not contain any parameters. I even found a blog post where someone reports a mistake in the MSDN documentation, but he has no solution for this problem. (http://sanjeev.dwivedi.net/?p=369)
Can you please help me? How do I solve this - opening my app by clicking on a *.gpx file?
Thanks

Hi Franklin,
thanks for your answer, but unfortunately it doesn't help, since I need this in a C++/Direct3D project. As you can see in the NavigationContext manual, there is no support for C++, especially "standard" C++ without component extensions (C++/CX).
But I finally figured it out, with one small disadvantage - it needs to target a WP8.1 universal app (which for the future release of Windows 10 is actually an advantage :) ).
So here is the solution demo:
hr = m_view->add_Activated(Microsoft::WRL::Callback<ActivatedHandler>(
         this, &Foo::OnActivated).Get(),
     &m_activated_token);

HRESULT Foo::OnActivated(ICoreApplicationView*, IActivatedEventArgs* args)
{
    // Here we can check if the app is already running (args->get_PreviousExecutionState) ...
    HRESULT hr = S_OK;
    ActivationKind kind;
    if (!args || FAILED(args->get_Kind(&kind)))
        return E_INVALIDARG;
    ComPtr<IActivatedEventArgs> activatedArgs(args);
    switch (kind)
    {
    case ActivationKind_Protocol:
    {
        ComPtr<IProtocolActivatedEventArgs> protocolArgs;
        if (FAILED(activatedArgs.As(&protocolArgs)))
            break;
        ComPtr<ABI::Windows::Foundation::IUriRuntimeClass> startUri;
        if (FAILED(protocolArgs->get_Uri(&startUri)))
            break;
        HSTRING uriHstring;
        startUri->get_AbsoluteUri(&uriHstring);
        // let's handle uriHstring in our way...
        break;
    }
    case ActivationKind_File:
    {
        ComPtr<IFileActivatedEventArgs> fileActivatedArgs;
        if (FAILED(activatedArgs.As(&fileActivatedArgs)))
            break;
        ComPtr<ABI::Windows::Foundation::Collections::IVectorView<ABI::Windows::Storage::IStorageItem*>> list;
        hr = fileActivatedArgs->get_Files(list.GetAddressOf());
        ComPtr<ABI::Windows::Storage::IStorageItem> item;
        if (FAILED(list->GetAt(0, item.GetAddressOf())))
            break;
        ComPtr<ABI::Windows::Storage::IStorageFile> file;
        if (FAILED(item.As(&file)))
            break;
        // let's handle file in our way...
        break;
    }
    }
    return hr;
}
Maybe you are looking for
-
Output type neu not coming only one vendor
Hi, I have maintained my condition record in the NACE t-code for purchase order key combination for document type ZCPO, so whenever I create a PO in the ME21N t-code using doc type ZCPO my output type NEU comes automatically, only one vendor code for my do
-
How to transfer movies to ipad 3 from macbook pro
how to transfer movies to ipad 3 from macbook pro?
-
Data Model: Keeping models in sync
Morning, I'm not clear on what exactly I need to do to keep models in sync. Here is the situation: 1. The original DB (schema) doesn't have a logical model available. 2. I reverse engineered the schema and proceeded to work on the logical model. My changes to the
-
Make to order - G/L Account no.
Hi all, I am doing a make-to-order scenario in which, if I check the incompletion log, the system says Missing data - G/L ACCOUNT NO. Can anyone please suggest why this incompletion is coming? Note: If I create the order and check incomple
-
Forbid a user group to modify an specific metadata field??
Is it possible to forbid a user group to MODIFY an specific metadata field or group of fields? I know you can create a new metadata SET and forbid the user group to modify the entire set, but what about groups or fields?? Any help is welcome