Best way to determine if document is truly unsaved

Is there a preferred method for determining whether a file is truly unsaved? I'm distinguishing between files that have been saved and then had some changes made to them (but have a valid name and filePath) and files that have just been created during this session and have not been saved anywhere (other than a temp directory). The only thing I can think of is checking whether an error is thrown by the filePath property; is this really the best way?
(I have looked into both the saved and modified properties, but neither provides the information I am looking for. In theory I could check the name property to see if it starts with "Untitled", but there's nothing stopping a user from actually calling a file that, so I'd rather not.)
Quick background: I have a script that lets a user browse to some files and get information about them. Files should be closed after processing, unless of course they were open to begin with. So, I do a check of each file against the documents open when the script starts running and set a flag. Just want to make sure that there isn't a better solution than catching the error.
Brief code:
    var openDocs = app.documents.everyItem().getElements();
    // funcDocs: the File objects (or already-open Documents) gathered earlier
    for (var i = 0; i < funcDocs.length; i++) {
        var funcDoc = funcDocs[i];
        var fileInUse = false;
        var openDoc;
        if (funcDoc instanceof File) {
            for (var f = 0; f < openDocs.length; f++) {
                try {
                    openDocs[f].filePath; // throws for documents never saved to disk
                } catch (e) {
                    continue;
                }
                if (openDocs[f].name == funcDoc.name && openDocs[f].filePath == funcDoc.path) {
                    fileInUse = true;
                    break;
                }
            }
            openDoc = app.open(funcDoc, false, 1147563124);
        } else {
            openDoc = funcDoc;
            fileInUse = true;
        }
        // Do something...
        if (fileInUse == false) { openDoc.close(); }
    }
I guess I'm looking for a bit of a sanity check that I'm not missing some more straightforward method here. Thanks in advance!

Help>About
Hold down CTRL or CMD and it gives a complete document history.

Similar Messages

  • What is the best way to create business documents in CRM

    Hi All,
    What is the best way to create business documents like contracts, sales orders, debit memos etc. in CRM? Unlike R/3 we can't use our good old BDC with recording. Moreover, for most of them there is a Business Object but no BAPI for creation, so what is the way? I found on SDN two magic function modules, CRMXIF_ORDER_SAVE among them. Do I always need to use that?
    Does it need to go via IDoc, or can it be done just by calling it from an ABAP program? The input parameter of the FM is a complex deep structure.
    Please help.

    Ashim,
    Try looking at the program:
    CRM_TEST_ORDER_MAINTAIN
    I think that should help you figure out the parameters.
    Good luck,
    Stephen

  • Best way to determine insertion order of items in cache for FIFO?

    I want to implement a FIFO queue. I plan on one producer placing unprocessed Orders into a cache. Then multiple consumers will each invoke an EntryProcessor which gets the oldest unprocessed order, sets processed=true and returns it. What's the best way to determine the oldest object based on insertion order? Should I timestamp the objects with a trigger when they're added to the cache and then index by that value? Or is there a better way, maybe something Coherence saves automatically when objects are inserted? Also, it's not critical that the processing order be precisely FIFO; close is good enough.
    Also, since the consumer won't know the key value for the object it will receive, how could the consumer call something like this so it doesn't violate Constraints on Re-entrant Calls? http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
    Thanks,
    Andrew
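
    For what it's worth, independent of anything Coherence stores automatically, the "timestamp at insert" idea can be made robust by stamping each entry with a monotonic sequence number (a wall-clock timestamp alone can collide for entries inserted in the same millisecond) and ordering by that stamp. A minimal plain-Java sketch of the idea; the class and method names are mine, not Coherence API:

    ```java
    import java.util.Comparator;
    import java.util.PriorityQueue;
    import java.util.concurrent.atomic.AtomicLong;

    public class FifoStamp {
        // Hypothetical order wrapper: stamps each entry at insert time so a
        // consumer can find the oldest unprocessed entry by comparing stamps.
        static class StampedOrder {
            final String orderId;
            final long seq;          // monotonic tiebreaker; wall-clock time alone can collide
            final long insertMillis; // kept for indexing/inspection, not for ordering
            StampedOrder(String orderId, long seq) {
                this.orderId = orderId;
                this.seq = seq;
                this.insertMillis = System.currentTimeMillis();
            }
        }

        private static final AtomicLong SEQ = new AtomicLong();
        private final PriorityQueue<StampedOrder> queue =
            new PriorityQueue<>(Comparator.comparingLong((StampedOrder o) -> o.seq));

        // Producer side: stamp on insert
        public void insert(String orderId) {
            queue.add(new StampedOrder(orderId, SEQ.getAndIncrement()));
        }

        // Consumer side: oldest stamped entry first; null when empty
        public String takeOldest() {
            StampedOrder o = queue.poll();
            return o == null ? null : o.orderId;
        }

        public static void main(String[] args) {
            FifoStamp q = new FifoStamp();
            q.insert("A"); q.insert("B"); q.insert("C");
            System.out.println(q.takeOldest()); // A
            System.out.println(q.takeOldest()); // B
        }
    }
    ```

    In a cache you would index on the stamp instead of using a local PriorityQueue, but the stamping scheme is the same.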

    Ok, I think I can see where you are coming from now...
    By using a queue for each FIX session you will experience some latency as data is pushed around inside the cluster between the 'owning node' for the order and the location of the queue; but if this is acceptable then great. The number of hops within the cluster, and hence the latency, will depend on where and how you detect changes to your orders. The advantage of assigning specific orders to each queue is that this will not change should the cluster rebalance; however you should consider what happens if the node controlling a specific FIX session is lost - do you recover from the FIX log? If so, where is that log kept? Remember to consider what happens if your cluster splits, such that the node with the FIX session is still alive but separated from the rest of the cluster. In examining these failure cases you may decide that it is easier to use Coherence's in-built partitioning to assign orders to sessions rather than an attribute of the order object.
    snidely_whiplash wrote:
    Only changes to orders which result in a new order or replace needing to be sent cause an action by the FIX session. There are several different mechanisms you could use to detect changes to your orders and hence decide if they need to be enqueued:
    1. Use a post trigger that is fired on order insert/update and performs the filtering of changes and if necessary adds the item to the FIX queue
    2. Use a cache store that does the same as (1)
    3. Use an entry processor to perform updates to the order object (as I believe you previously mentioned) and performs logic in (1)
    4. Use a CQC on the order cache
    5. A map listener on the order cache
    The big difference between 1-3 and 4, 5 is that the CQC is i) a SPOF ii) not likely located in the same place as your order object or the queue (assuming that queue is in fact an object in another cache), iii) asynchronously fired hence introducing latency. Also note that the CQC will store your order objects locally whereas a map listener will not.
    (1) and (3) will give you access to both old and new values should that be necessary for your filtering logic.
    Note you must be careful not to make any re-entrant calls with any of 1-3. That means if you are adding something to a FIX queue object in another cache (say using an entry processor) then it should be on a different cache service.
    snidely_whiplash wrote:
    If I move to a CacheStore based setup instead of the CQC based one then any change to an order, including changes made when executions or rejects return on the FIX session, will result in the store() method being called, which means it will be called unnecessarily a lot. It would be nice if I could specify that the CacheStore only store() certain types of changes, i.e. those that would result in sending a FIX message. Anything like that possible?
    There is negligible overhead in Coherence calling your store() method; assuming that your code can decide whether anything FIX-related needs to be done based only on the new value of the order object, this should be very fast indeed.
    snidely_whiplash wrote:
    What's a partitioned "token cache"?
    This is a technique I have used in the past for running services. You create a new partitioned cache into which you place 'tokens' representing a user-defined service that needs to be run. The insertion/deletion of a token in the backing map fires a backing map listener to start/stop a service (note there are two causes of insert/delete in a backing map: i) a user, ii) cluster repartitioning). In this case that service might be a FIX session. If you need to designate a specific member on which a service needs to run then you could add the member id to the token object; however, be careful that unless you write your own partitioning strategy the token will likely not live on the cache member the token indicates, in which case you would want a full map listener or CQC to listen for tokens rather than a backing map listener.
    I hope that's useful rather than confusing!
    Paul

  • What the best way for STORAGE the documents from DMS?

    Please, can someone tell me if the limit for storing originals through a data carrier is two per document, or can I increase this limit? I know that using KPro there is no storage limit, but I'm trying to use data carriers because I only have space on the company server for storing these originals.
    I'm trying to store the originals in a path like
    abcs01\dms$\ through a "server, front ends" data carrier configuration, but the originals are not sent there from CV01N, even though the document is saved. I verified that the documents were not sent: when I open the same document in CV02N and try to change or display the originals, I receive an error message saying that SAP cannot find the originals in the indicated path.
    Has anyone worked on a similar situation?
    Does anyone have a view on the best way to store these documents in this case?
    If the best way is KPro, should I use KPro with the Content Server, or KPro storing the documents in an SAP table through the standard application "DMS_C1"?
    Any idea will be welcome and rewarded with good points!
    Best Regards.

    Hi,
    The decision to go for either the content server or the vault depends on what data will be stored. If you are storing drawing data it is highly recommended to go for the content server.
    For vault storage, performance issues have been reported.
    DMS_C1 is an IDES-based BSP database repository. It is useful only for dev/testing.
    You will be required to create a storage repository for the Content Server.
    If you have a server available, inserting the Content Server CD automatically prompts you to create a MaxDB database instance; clicking Next takes you through the MaxDB instance creation. KPro is a service on the web server; it manages the check-in/check-out and other functionality of the Content Server data.
    BR,
    Anirudh,
    reward if useful.

  • What is the best way to import word documents from a PC to the Ipad2

    What is the best way to import word documents from a PC to the Ipad 2?

    You first need an app on the iPad that supports word documents - the iPad doesn't have a file structure like a 'normal' computer, and every file/document needs to be associated with an app. If you don't have such an app then options include Apple's Pages app and third-party apps such as Documents To Go and QuickOffice HD. How you then get the documents on the iPad will depend upon the app that you have/get as different apps may use different methods e.g. via the file sharing section at the bottom of the device's apps tab when connected to iTunes, via wifi, email, dropbox etc.

  • Best way to determine if JMS server is alive in a cluster

    Can anyone give me an idea on the best way to find out if a JMS server in a cluster has failed, so I can signal migration to another server in the cluster?
    Thanks, Larry
    PS: WebLogic 7.0 SP1

    Hallo Larry,
    You can go via JMX and retrieve the according RuntimeMBeans in order to check the health state of the JMSServer resp. the server hosting the JMSServer. If they are not available or failed you can trigger the migration. At least that's the way I'm doing it...
    try {
        JMSServerMBean jmsServer = null;
        ServerMBean candidateServer = null;
        MigratableTargetMBean migratableTarget = null;
        // Retrieve all JMSServers defined for the current domain
        Set jmsServerSet = home.getMBeansByType("JMSServer", domainName);
        Object[] jmsServers = jmsServerSet.toArray();
        // Just the first one is picked, assuming there is only one defined
        // within the active FHO domain
        if (jmsServers != null && jmsServers.length > 0) {
            jmsServer = (JMSServerMBean) jmsServers[0];
        }
        if (s_logger.isDebugEnabled()) {
            s_logger.debug("JMSServer: " + jmsServer.getName());
        }
        // A JMSServer can only be associated with a single target,
        // thus pick again the first from the list.
        TargetMBean[] targets = jmsServer.getTargets();
        if (targets != null && targets.length > 0) {
            boolean hostingServerRunning = false;
            boolean candidateServerRunning = false;
            // Check whether the JMSServer is really associated with
            // a migratable target. Otherwise the migration must be canceled
            // since it cannot be performed!
            if (targets[0] instanceof MigratableTargetMBean) {
                migratableTarget = (MigratableTargetMBean) targets[0];
                // Retrieve all available candidates and select a running instance
                // if any. First check for constrained candidate servers, then for
                // all candidate servers
                ServerMBean[] candidates =
                    migratableTarget.getConstrainedCandidateServers();
                if (candidates == null || candidates.length == 0) {
                    candidates = migratableTarget.getAllCandidateServers();
                }
                if (candidates != null && candidates.length > 0) {
                    ServerMBean hostingServer = migratableTarget.getHostingServer();
                    boolean gotHostingServer = false;
                    boolean gotCandidateServer = false;
                    boolean runningInstance = false;
                    // Loop over all candidates until hosting server and candidate
                    // server have been visited and their running state determined
                    for (int i = 0; i < candidates.length; i++) {
                        ServerRuntimeMBean serverRuntime = null;
                        // Retrieve the current state from the according runtime
                        // MBean, if available
                        try {
                            serverRuntime = (ServerRuntimeMBean) home.getMBean(new
                                WebLogicObjectName(candidates[i].getName(), "ServerRuntime",
                                domainName, candidates[i].getName()));
                            runningInstance =
                                serverRuntime.getState().equalsIgnoreCase(ServerRuntimeMBean.RUNNING);
                        } catch (InstanceNotFoundException inf) {
                            // When a server instance is not available, an
                            // InstanceNotFoundException will be raised by WLS,
                            // which can be ignored
                        }
                        if (hostingServer != null && hostingServer.equals(candidates[i])) {
                            hostingServerRunning = runningInstance;
                            gotHostingServer = true;
                        } else {
                            // A running candidate server is preferred, thus only if no
                            // running instance can be detected is another instance selected
                            if (!gotCandidateServer) {
                                candidateServerRunning = runningInstance;
                                candidateServer = candidates[i];
                                gotCandidateServer = runningInstance;
                            }
                        }
                        if (gotCandidateServer && gotHostingServer) {
                            break;
                        }
                    }
                    if (s_logger.isDebugEnabled()) {
                        s_logger.debug("Migratable Target: " + migratableTarget.getName());
                        s_logger.debug("Candidate Server: " + candidateServer.getName());
                    }
                }
            } else {
                throw new Exception("JMSServer not deployed on a migratable target!");
            }
            // Retrieve the migration service coordinator for the active domain,
            // assuming there exists only one, and invoke the migration
            MigratableServiceCoordinatorRuntimeMBean coordinator = null;
            Set coordinatorSet =
                home.getMBeansByType("MigratableServiceCoordinatorRuntime", domainName);
            Object[] coordinators = coordinatorSet.toArray();
            if (coordinators.length > 0) {
                coordinator = (MigratableServiceCoordinatorRuntimeMBean) coordinators[0];
                if (enforceMigrationOnInstancesDown) {
                    coordinator.migrate(migratableTarget, candidateServer,
                        hostingServerRunning, candidateServerRunning);
                } else {
                    coordinator.migrate(migratableTarget, candidateServer);
                }
                s_logger.info("Migration of JMSServer from node "
                    + migratableTarget.getName()
                    + " to node "
                    + candidateServer.getName()
                    + " has been started");
            } else {
                throw new Exception("MigrationServiceCoordinator cannot be retrieved");
            }
        }
    } catch (Exception e) {
        s_logger.error("Could not migrate JMSServer", e);
    }
    Regards,
    CK

  • Best way to determine optimal font size given some text in a rectangle

    Hi Folks,
    I have a preview panel in which I am showing some text for the current selected date using a date format.
    I want to increase the size of the applied font so that it scales nicely when the panel in which it is drawn is resized.
    I want to know the best way, in terms of performance, to achieve this. I did some reading about AffineTransform and about determining the correct size by checking in a loop, but it does not feel like a good way.
    I would appreciate some tips.
    Cheers.
    Ravi

    import java.awt.*;
    import java.awt.font.*;
    import java.awt.geom.*;
    import javax.swing.*;
    public class ScaledText extends JPanel {
        String text = "Sample String";
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D)g;
            g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                                RenderingHints.VALUE_ANTIALIAS_ON);
            Font font = g2.getFont().deriveFont(16f);
            g2.setFont(font);
            FontRenderContext frc = g2.getFontRenderContext();
            int w = getWidth();
            int h = getHeight();
            float[][] data = {
                { h/8f, w/3f, h/12f }, { h/3f, w/4f, h/8f }, { h*3/4f, w/2f, h/16f }
            };
            for(int j = 0; j < data.length; j++) {
                float y = data[j][0];
                float width = data[j][1];
                float height = data[j][2];
                float x = (w - width)/2f;
                Rectangle2D.Float r = new Rectangle2D.Float(x, y, width, height);
                g2.setPaint(Color.red);
                g2.draw(r);
                float sw = (float)font.getStringBounds(text, frc).getWidth();
                LineMetrics lm = font.getLineMetrics(text, frc);
                float sh = lm.getAscent() + lm.getDescent();
                float xScale = r.width/sw;
                float yScale = r.height/sh;
                float scale = Math.min(xScale, yScale);
                float sx = r.x + (r.width - scale*sw)/2;
                float sy = r.y + (r.height + scale*sh)/2 - scale*lm.getDescent();
                AffineTransform at = AffineTransform.getTranslateInstance(sx, sy);
                at.scale(scale, scale);
                g2.setFont(font.deriveFont(at));
                g2.setPaint(Color.blue);
                g2.drawString(text, 0, 0);
            }
        }

        public static void main(String[] args) {
            JFrame f = new JFrame();
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.getContentPane().add(new ScaledText());
            f.setSize(400,400);
            f.setLocationRelativeTo(null);
            f.setVisible(true);
        }
    }

  • Pricing: Best way to determine a partial condition amount

    Hi,
    In our scenario MWST (VAT) is calculated from multiple values/conditions (price and postage&package costs).
    What is the best way to extract the VAT value for only one of these conditions? I can only think of extracting this value via ABAP within a routine, but then the value isn't rounded. Is there an easier way, via customizing/conditions, to extract this value?
    We need this value to calculate/sum the net-value for postage and package costs.
    Kind regards,
    Tim

    What is the best way to extract the VAT value for
        only one of these conditions?
    If the requirement will not vary from billing to billing, then you can accordingly assign the From-To step number in your pricing procedure. 
    thanks
    G. Lakshmipathi

  • Best way to Organise - Shared Documents with Others and to iPad2

    I have a large number of iWork documents on my iMac. Some of these documents I wish to share with others and also to my iPad2.
    Previously I used the iWork.com BETA, and this was integrated with Pages, Numbers and Keynote through the 'Share' icon in each application.  This was very convenient, linked with email, and worked seamlessly.  However now I have to upload and download documents through the iCloud website and issue separate email invitations. A more manually intensive way of achieving the previous 'slick' method.
    Has anyone found a good way of organising and distributing documents?
    Any suggestions for a good iWork.com replacements?

    That is a question I also have. It seems that there is no way to do this currently. When it comes to documents, iCloud is more or less useless because (a) it can only sync files that are stored inside the apps and (b) in iOS you can't move files out of the apps.
    The best solution I know about is "wuala". It syncs everything you want on your Mac and it runs on all common systems. So I have synced all my documents across my Macs, iPhone and iPad and can share files & folders with others.
    The only problem now is, that once you open a file on the iPhone or iPad, it will be instantly saved there (inside the app) and you can't move it to the original location (the sync folder in wuala). This part breaks the whole concept and I don't know any alternative.

  • Best way to determine what objects has been selected in a Collection?

    Hi all
    I'm currently developing an application where a user can create a PDF based on the choices made from multiple collections.
    Each collection contains 10-50 items, and there are about 8 different collections with objects.
    Checkboxes are used to select items from these collections.
    I'm wondering how I would best determine what choices have been made, and whether it would be good to remove all objects from a collection that have NOT been chosen?
    Currently, it looks like this (example for one collection, but the same solution is used for all collections):
    private Collection<Texture> textureList = new ArrayList<Texture>();
    private ArrayList<Texture> textureResult = new ArrayList<Texture>();

    for (Texture t : textureList) {
        if (t.isSelected()) {
            textureResult.add(t);
        }
    }

    After this iteration, textureResult is used to create the PDF.
    This PDF contains lists with dynamic frames, so I need to know how many items were selected before creating the PDF.
    Wondering if this is the best/most efficient way to do this though?
    Maybe it doesn't matter all that much with lists this small, but I'm still curious :-)
    I guess you could do something like this as well (holding one iterator for the whole loop; calling textureList.iterator() on every pass would create a fresh iterator each time and never advance):

    Iterator<Texture> it = textureList.iterator();
    while (it.hasNext()) {
        Texture t = it.next();
        if (!t.isSelected()) {
            it.remove();
        }
    }

    Any suggestions?

    Dallastower wrote:
    I'm wondering how I would best determine what choices have been made
    Are you asking how to determine which boxes have been checked? Or do you know how to do that and you're asking how to associate those boxes with items in your Collections? Or do you know how to do that and you're asking how to keep track of those selected items in the Collection?
    I don't do GUIs, so I can't help with the first two, but for the third, you could create a new collection holding just the selected ones, or remove the unselected ones from the original Collection.
    and whether it would be good to remove all objects from a collection that have NOT been chosen?
    That's entirely up to you. If you create the original Collection when the user makes his selection, and only need it to survive one round of selection, that may be fine. But if you need to get back to the original collection later, and it's expensive to create, then you might want to just create a second collection and add items from the original to it if they're selected.
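
    Both options from the reply can be sketched in a small self-contained example. The Texture class here is a minimal stand-in for the poster's class (only isSelected() is assumed), not their real one:

    ```java
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    public class SelectionFilter {
        // Minimal stand-in for the poster's Texture class (hypothetical)
        static class Texture {
            private final boolean selected;
            Texture(boolean selected) { this.selected = selected; }
            boolean isSelected() { return selected; }
        }

        // Option A: keep the original list intact, copy the selected items out
        static List<Texture> copySelected(List<Texture> all) {
            List<Texture> result = new ArrayList<Texture>();
            for (Texture t : all) {
                if (t.isSelected()) {
                    result.add(t);
                }
            }
            return result;
        }

        // Option B: remove unselected items in place via Iterator.remove()
        static void removeUnselected(List<Texture> all) {
            Iterator<Texture> it = all.iterator(); // one iterator for the whole loop
            while (it.hasNext()) {
                if (!it.next().isSelected()) {
                    it.remove();
                }
            }
        }

        public static void main(String[] args) {
            List<Texture> list = new ArrayList<Texture>();
            list.add(new Texture(true));
            list.add(new Texture(false));
            list.add(new Texture(true));

            System.out.println(copySelected(list).size()); // 2
            removeUnselected(list);
            System.out.println(list.size());               // 2
        }
    }
    ```

    Either way the size of the resulting collection gives the item count needed for the dynamic frames.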

  • Best way to send GR document to email destination output printing

    When a goods receipt document is printed, I need to send an email with the document attached.  It does not matter if it goes to an external email address or to the SAPoffice (Workplace). 
    If it can not be triggered during GR output creation, I could just do it when the GR is processed.
    What is the best approach?  Can this be done in output determination or is there a user exit?
    Thanks!
    Norm

    This looks very helpful, but I still have one problem. 
    The requirement will allow me to check the PLANT.  This is perfect.  However, I can only set this up once per condition type.
    Details...
    I am using WE03.  WE03 needs to print.  I cannot define a requirement to stop WE03 for email, because it will stop it from printing.
    In SD, I would create a new output condition for email (like ZE03).  I could have a requirement on ZE03 to check PLANT and use it for email.
    I thought there were issues with new output types in MM.  The standard SAP processing programs have values hard coded.  I remember seeing things like "WE" being concatenated to "03" to form "WE03" in standard SAP code.  New output types like ZE03 will not work. 
    Maybe this has changed?  Maybe I am wrong?
    I will try it.

  • Best way to update CO documents

    Hi friends,
    We have identified a few CO documents where we need to update material costs in KE24 (transaction) and in table CE14000 in SAP. Is there a best approach to fix this data? We have figured out a few options but would like advice from the experts.
    Thanks in advance.

    Hi
    A BAPI is an upload tool just like LSMW... In LSMW you do a recording to map the fields...
    Similarly, a BAPI allows you to write code to do the mapping... You will have to tell your ABAPer all the characteristics and value fields which you would like to upload. I would suggest mapping all the characteristics and value fields so that it will also be useful in the future.
    Don't worry about the technical aspect of the BAPI; your ABAPer will be comfortable with it... Just post a manual document from KE21N in front of him and tell him this is what you expect from the upload program via the BAPI.
    He will create this program using the BAPI and create a T-code from SE93 assigning this program.
    Hope this helps
    Regards
    Ajay M

  • What's the best way to determine which row a user clicked on via a link?

    Hello. Probably a simple question, but my googling is failing me. I have a table with a column that is a command link. How can I determine which row the user clicked on? I need to take that value and pass it to a different page to bind it for a different query. I was thinking of setting the result in a session bean? Or is there a better way?
    Thanks!

    Hi,
    You have two options:
    1. (Complex) Have your ActionListener evaluate the event to get the source, then climb the component tree up to the table and get the current row data;
    2. (Simple) Add a setPropertyActionListener to the link with value="#{var}" target="#{destination}" where var is the table's var attribute value and destination is your managed bean that required the clicked row.
    Regards,
    ~ Simon

  • Best way to serialize large document to XML file

    I'm kind of new to XML programming, and am having trouble with streaming very large
    content to an XML file. Basically, I need to read/validate/convert a large CSV
    file to an XML document. I've had success for small documents using JAXP, but
    encountered problems when the size of the files got larger.
    Instead of loading the entire CSV file in memory, validating and outputting the
    entire document to a XML file, I want the ability to read a single line, validate
    it, convert it to an element and output the single element to XML. The problem
    is that with Xerces, DOM, etc. serialization routines, they don't give me the
    ability to control the writing of elements in the desired way.
    For example,
    <Parent>
    <Child> (represents 1st record of CSV)
    </Child>
    <Child> (represents 2nd record of CSV)
    </Child>
    </Parent>
    I want the ability to
    1) stream the Parent start tag to the XML file
    2) create an element from the content in the CSV record
    3) stream the new element to the XML file
    4) repeat steps 2-3 until end of CSV file
    5) write the Parent end tag
    It seems like all serializers don't allow this behavior. They only allow a complete
    element to be written. Since I don't have a complete Parent element in memory,
    this will not work for me. Will stax work for this particular problem?

    Hi Joe,
    What kinds of problems were you getting with JAXP when you scaled up the
    size of the doc?
    StAX, it seems, would not help in the generation of the XML, but could
    you not create a DOM for each child and write them separately, enclosing
    the entire doc in the parent tags? Just a suggestion.
    Thanks,
    Bruce
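
    For the record, StAX can generate XML exactly this way: the JDK's XMLStreamWriter (javax.xml.stream, bundled since Java 6) writes each start tag, element, and end tag as you call for it, so only one record needs to be in memory at a time. A minimal sketch of the five steps; the class and method names are mine:

    ```java
    import java.io.StringWriter;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamException;
    import javax.xml.stream.XMLStreamWriter;

    public class CsvToXmlStream {
        // Streams one <Child> per CSV record without holding the document in memory.
        static String convert(String[] csvRecords) throws XMLStreamException {
            StringWriter out = new StringWriter(); // a FileWriter would work the same way
            XMLStreamWriter w =
                XMLOutputFactory.newInstance().createXMLStreamWriter(out);
            w.writeStartDocument();
            w.writeStartElement("Parent");     // step 1: Parent start tag, written immediately
            for (String record : csvRecords) { // steps 2-4: one element per record
                w.writeStartElement("Child");
                w.writeCharacters(record);     // real code would validate/convert here
                w.writeEndElement();
            }
            w.writeEndElement();               // step 5: Parent end tag
            w.writeEndDocument();
            w.close();
            return out.toString();
        }

        public static void main(String[] args) throws XMLStreamException {
            System.out.println(convert(new String[] { "a,1", "b,2" }));
        }
    }
    ```

    In the real use case you would read the CSV line by line and write to a file stream instead of a StringWriter, keeping memory flat regardless of file size.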

  • Best Way to Determine if a Table is used in a Report?

    Hello,
    I'm looking to modify an existing report developed by someone previously employed.  I believe they are joining unneeded tables and I'd like to remove them. Is there a method that is useful to determine if a table/view is not being used in the report?
    Just looking for any suggestions/advice on how to handle this.
    Thanks,

    Hi Trey,
    Try this.
    In your report, go to Field Explorer -> Database Fields.
    It will list the command objects/tables that are added to your report.
    If you want to see which command object/table is used in your report, expand each one and check whether any of its fields has a tick mark.
    If a field has a tick mark, that means the field is used in that report.
