ECCS : Data Collection tasks

Hello Gurus,
I want to customize my system so that manual entry tasks, flexible upload and rollup tasks are the ONLY tasks available in the "Data Collection" task.
However, the system defaults to an option called "Additional financial data", where "supplier data" can be imported, even though I did not customize anything for this option. I cannot find a way to remove this option from my "Data Collection" task layout.
Do you have any solution for me?
Thank you in advance for your help
Best regards
Pascal

Hello Dan and thank you for this first reply,
- For "data collection" task group I don't have the possibility to define the tasks to be used or not (the system apply his logic depending on different custo parameters).
- Regarding the "profit-in-inventory" option into the dimension, the system only gives the options to set the product group usage as as an independent characteristic or existing custom characteristic but there is no option that indicates the profit-in-inventory as not being implemented...
Is there anything else I should check? If yes, could you please give me more details?
Thank you in advance for your help
best regards
Pascal

Similar Messages

  • Data Collection task error (Item  is not defined in Cons Chart of Accts 01)

    Dear Experts,
    While executing the data collection task, I am getting the error "Item  is not defined in Cons Chart of Accts 01". What could be the reason?
    (Many items are being collected in the target, but this error is shown at the top.)
    Regards
    Ritesh M

    Thanks Dan,
    I have run the program UGMDSYNC and found a few G/L accounts that have no consolidation chart of accounts assigned.
    Could you please tell me how to assign the chart of accounts to these G/L accounts?

  • Data collection task execution in UCMON

    Dear Friends
    I am using the flexible upload method and load from data streams for data upload. Both methods are assigned to my data collection method, which in turn is assigned to the task. When I execute the data collection task in the consolidation monitor, the system executes only the load from data streams and does not give me the option of selecting the method to execute.
    To differentiate them, I created a new task and assigned only the flexible upload method, but when I execute this task the system still executes the load from data streams method.
    Can anyone guide me as to where I am going wrong?
    Cheers
    Shivkumar

    Hi Mani
    I could not follow what you meant to say
    "I have done same what u describe, but do not have the issue. Create seperate method for data steam and FU. Then merged both in one method and assigned such method to one task and no issue.. I face that u described,
    Can you assign the method to the task, starting from the same starting period and check again."
    Do you mean to say that you did the same thing I am doing and you faced the same problem too? The strange behaviour of the system is that when I execute it in "Execute with selection screen" mode the system shows all the methods; however, if I run it in update mode the system executes only "Load from data streams", even though the "Flexible upload" method is assigned.
    BTW where do you set the validity for a task?
    Cheers
    Shivkumar

  • Consolidation Group in Target Cube (Data Collection Task)

    Dear Experts,
    In the Consolidation Monitor, while running the data collection task via Load from Data Stream, after the update, when I look at the content of the target InfoCube in RSA1, the G/L account line items have no Consolidation Group value.
    For example:
    GL account   Company   CCode   Cons Group   Currency   PV LC   PV GC
    100000       3000      3000    (blank)      USD        1000    44000
    1) Is it necessary to have a value in Consolidation Group?
    2) If yes, what is its purpose and how do I get a value into this column?
    Regards
    Ritesh M.

    No, ConsGroups are determined later, during the ConsGroup-dependent tasks.

  • Data collection task in Update mode is not possible with Original list

    Hi Friends,
    When I try to execute the data collection task in SA38, I get the following error within 9 seconds:
    UPDATE MODE IS NOT POSSIBLE WITH ORIGINAL LIST
    Message No. UCS0111
    Diagnosis:
    If you select the indicator "Original list", the current task status is ignored. In this case, the system simulates a task run in which all of the organizational units are not yet blocked. Thus, setting this indicator is meaningful and valid only if the task is executed in test mode at the same time.
    Procedure: select Original list and Test run together.
    My problem is that I cannot select Test run on the production server.
    Can anyone please help me solve this issue?

    I thank you all for responding.
    Hi Dan,
    SA38 provides all the needed parameters: Cons Group, Company, Version, Fiscal Year, Period, Group Currency.
    I am trying to run the data collection task with SA38.
    Hi Lucio,
    When I check both the Log and Original list fields, the program gives the error message "UPDATE MODE IS NOT POSSIBLE WITH ORIGINAL LIST".
    Even when I tried Log, Test Run and Original list together, I got the same error message.
    I can run the task successfully with only the Log option selected, but my client needs to use both Log and Original list (as they did in BW 3.5); we are now on BI 7.
    Hi Christopher,
    We are running the program UCBATCH01 with transaction code SA38.
    Hi Eugene,
    I will look into the link and discuss it with the Basis team.
    Once again, thank you all for responding.
    Suraj.

  • Data collection was switched from an AI Config task writing to an hsdl file to synchronized DAQmx tasks logging to TDMS files. Why are different readings produced for the same test?

    A software application was developed to collect and process readings from capacitance sensors and a tachometer in a running spin rig. The sensors were connected to an Aerogate Model HP-04 H1 Band Preamp connected to an NI PXI-6115. The sensors were read using AI Config and AI Start VIs. The data was saved to a file using hsdlConfig and hsdlFileWriter VIs. In order to add the capability of collecting synchronized data from two Eddy Current Position sensors in addition to the existing sensors, which will be connected to a BNC-2144 connected to an NI PXI-4495, the AI and HSDL VIs were replaced with DAQmx VIs logging to TDMS. When running identical tests, the new file format (TDMS) produces reads that are higher and inconsistent with the readings from the older file format (HSDL).
    The main VIs are SpinLab 2.4 and SpinLab 3.8, in the folders "SpinLab old format" and "Spinlab 3.8" respectively. SpinLab 3.8 requires the Sound and Vibration suite to run correctly, but that part is used after the point that is causing the problem. The problem is occurring either during data collection in the Logger segment of the code or during processing in the Reader/Converter segment. I could send the readings from the identical tests if they would be helpful, but the data takes up approximately 500 MB.
    Attachments:
    SpinLab 3.8.zip ‏1509 KB
    SpinLab 2.4.zip ‏3753 KB
    SpinLab Screenshots.doc ‏795 KB

    First of all, how different is the data?  You say that the reads are higher and inconsistent.  How much higher?  Is every point inconsistent, or is it just parts of your file?  If it's just in parts of the file, does there seem to be a consistent pattern as to when the data is different?
    Secondly, here are a couple things to try:
    Currently, you are not calling DAQmx Stop Task outside of the loop; you're just calling DAQmx Clear Task.  This means that if there were any errors that occurred in the logging thread, you might not be getting them (as DAQmx Clear Task clears outstanding errors within the task).  Add a DAQmx Stop Task before DAQmx Clear Task to make sure that you're not missing an error.
    Try "Log and Read" mode.  "Log and Read" is probably going to be fast enough for your application (as it's pretty fast), so you might just try it and see if you get any different result.  All that you would need to do is change the enum to "Log and Read", then add a DAQmx Read in the loop (you can just use Raw format since you don't care about the output).  I'd recommend that you read in even multiples of the sector size (normally 512) for optimal performance.  For example, your rate is 1MHz, perhaps read in sizes of 122880 samples per channel (something like 1/8 of the buffer size rounded down to the nearest multiple of 4096).  Note: This is a troubleshooting step to try and narrow down the problem.
    Finally, how confident are you in the results from the previous HSDL test?  Which readings make more sense?  I look forward to hearing more detail about how the data is inconsistent (all data, how different, any patterns).  As well, I'll be looking forward to hearing the result of test #2 above.
    Thanks,
    Andy McRorie
    NI R&D

  • SP2013 alternative for Collect Data workflow task?

    SP2010 offered a workflow action named Collect Data, allowing for easy data collection from users without having to create custom ASPX forms. Apparently, SP2013 no longer offers this feature.
    I'm doing an SP2013 project now that absolutely prohibits me from doing any custom development, so I'm trying to find an easy way to collect data from users in SP2013 without having to do custom coding. Are there alternatives?

    Hi,
    I understand that there is a big list of workflow actions that are not available in SharePoint 2013. However, SharePoint 2013 still allows users to create workflows on the SharePoint 2010 platform. If you need these legacy actions, make sure you select the "Platform Type" as SharePoint 2010 when you create the workflow. You cannot change the platform type once the workflow is created.
    Source link:
    What's new in workflow in SharePoint Server 2013
    http://technet.microsoft.com/en-us/library/jj219638.aspx
    Tracy Cai
    TechNet Community Support

  • SQL Query using a Variable in Data Flow Task

    I have a Data Flow task that I created. The source query is in the file "LPSreason.sql", which is stored on a shared drive such as
    \\servername\scripts\LPSreason.sql
    How can I use this .sql file as a SOURCE in my Data Flow task? I guess I can use SQL Command as the access mode, but I'm not sure how to do that.

    Hi Desigal59,
    You can use a Flat File Source adapter to get the query statement from the .sql file. When creating the Flat File Connection Manager, set the Row delimiter to a character that won’t appear in the SQL statement, such as “Vertical Bar {|}”. In this way, the Flat File Source outputs only one row with one column. If necessary, you can change the data type of the column from DT_STR to DT_TEXT so that the Flat File Source can handle SQL statements longer than 8000 characters.
    After that, connect the Flat File Source to a Recordset Destination, so that the column is stored in an SSIS Object variable (suppose the variable name is varQuery).
    In the Control Flow, we can use one of the following two methods to pass the value of the Object type variable varQuery to a String type variable QueryStr which can be used in an OLE DB Source directly.
    Method 1: via Script Task
    1. Add a Script Task under the Data Flow Task and connect them.
    2. Add User::varQuery as ReadOnlyVariables and User::QueryStr as ReadWriteVariables.
    3. Edit the script as follows:
    public void Main()
    {
        // Fill a DataTable from the ADO recordset stored in the Object variable User::varQuery
        System.Data.OleDb.OleDbDataAdapter da = new System.Data.OleDb.OleDbDataAdapter();
        System.Data.DataTable dt = new System.Data.DataTable();
        da.Fill(dt, Dts.Variables["User::varQuery"].Value);

        // The single row/column holds the SQL text read from the .sql file
        Dts.Variables["User::QueryStr"].Value = dt.Rows[0].ItemArray[0].ToString();

        Dts.TaskResult = (int)ScriptResults.Success;
    }
    4. Add another Data Flow Task under the Script Task and connect them. In the Data Flow Task, add an OLE DB Source, set its Data access mode to “SQL command from variable”, and select the variable User::QueryStr.
    Method 2: via Foreach Loop Container
    Add a Foreach Loop Container under the Data Flow Task, and join them.
    Set the enumerator of the Foreach Loop Container to Foreach ADO Enumerator, and select the ADO object source variable as User::varQuery.
    In the Variable Mappings tab, map the collection value of the enumerator to User::QueryStr, with Index 0.
    Inside the Foreach Loop Container, add a Data Flow Task like step 4 in method 1.
    Regards,
    Mike Yin
    TechNet Community Support

  • SSIS Data Flow task using SharePoint List Adapter Setting SiteUrl won't work with an expression

    Hi,
    I'm trying to populate the SiteUrl from a variable that has been set based on a query to a SQL table that has a URL field.  Here are the steps I've taken and the result.
    Created a table with a url in it to reference a SharePoint Task List.
    Created an Execute SQL Task to grab the URL, putting the result set in a variable called SharePointUrl.
    Created a For Each container, and within the collection I use SharePointUrl as the ADO object source variable and select rows in the first table.
    Still in the For Each container, in the Variable Mappings I have another package variable called PassSiteUrl2, and I set that to Index 0, i.e. the value of the result set.
    I created a Script Task to display the PassSiteUrl2 variable and it works great; I see my URL.
    This is where it starts to suck eggs!!!!
    I insert a Data Flow Task into my foreach loop.
    I Insert a SharePoint List Adapter into my Data Flow
    Within my SharePoint List Adapter I set my list to be "Tasks", My list view to be "All Tasks" and then I set the url to be another SharePoint site that has a task list just so there is some default value to start with.
    Now within my Data Flow I create an expression and set the [SharePoint List Source].[SiteUrl] equal to my variable @[User::PassSiteUrl2].
    I save everything and run my SSIS package, and it overlays the default [SharePoint List Source].[SiteUrl] with blanks in the SharePoint List Adapter, then throws an error that it is missing a URL.
    So here is my question: why, if my package variable displays fine in my Control Flow, is it now not seen (or seen as blank) in the Data Flow expression? Anyone have any ideas?
    Thanks
    Donald R. Landry

    Thanks Arthur,
    The scope of the variable is at the package level, and when I check whether it can be moved, package level is the highest level.  The EvaluateAsExpression property is set to True.  Any other ideas?
    I also tried the following: take the variable that has the URL in it and just assign it to the description of the Data Flow Task to see if it would show up there (the idea being that the value of my @[User::PassSiteUrl] should just show in the description field when the package is run). That also shows up blank.
    So I'm thinking it's my expression. All I do in the expression is set [SharePoint List Source].[SiteUrl] equal to @[User::PassSiteUrl] by dragging and dropping the variable into the expression box. Maybe the expression should be something else, or is there a way to say @[User::PassSiteUrl] = Dts.Variables("User::PassSiteUrl2").Value.ToString()?
    In my Script Task I use Dts.Variables("User::PassSiteUrl2").Value.ToString() to display the value in the message box, and that works fine.
    Donald R. Landry

  • BCS - Data collection issue

    Hi,
    I'm using BCS 4.0. I'm in final testing now and I have some questions regarding the data collection process using the load from data stream method. I ran the task in the consolidation monitor for period 007.2007 and for all companies without any errors or warnings, but we have differences in the financial information for that period.
    I reviewed the content of my BCS cube (RSA1) and we don't have any data for those accounts; the only thing I found was that all documents were created on the same date.
    I deleted the request ID in RSA1 in my BCS cube and executed the task again in the consolidation monitor, but the result was the same.
    Looking at the log / source data, the rows listed are not being taken from the InfoProvider.
    Any idea what the problem could be?
    Thanks
    Nayeli

    Hi Nayeli,
    I had to do this kind of job (reconciliation of data in the source basis cube and the totals cube) during the final testing a lot of times, with an accountant.
    The only way to get a first clue about what is happening is to compare every particular amount in both cubes, comparing and trying to figure out any dependencies in the data.
    The difference might arise for ANY reason. Only you can analyze the data and decide what to do.
    AFAIU, you compared only reported data and did no currency translation and eliminations?
    Then, I'm afraid you've made a very big mistake by deleting the request from the totals cube. You have broken the consistency between the totals and the document data. For a transactional cube, a request stays yellow until the number of records in it reaches 50,000; then it is closed and becomes green. As you may understand, those 50,000 records may contain a lot of data for different posting periods, for example reported data for the previous closed period. The documents, however, still contain eliminations for that period - an inconsistency.

  • Report Manager - Data collection

    Hi,
    I'm having problems with data collection on a number of servers.
    Essentially, I've configured data logging on a number of servers by creating a job in the 'Manage jobs' window, using the task type 'data logging' and running it against a number of hosts.
    Unfortunately, when I look at the 'data collection' window via Report Manager, a number of host entries have an exclamation mark attached, which implies that data has not been collected. If I look at the attribute editor for one of the logged entries on an affected server, the history tab clearly states that 'Save History as Disk_File' is ticked.
    Any ideas?

    I have also seen this issue. Some of these issues were fixed by checking DNS. For data collection to work, both forward and reverse DNS must work for the agent. If there is a mismatch or missing DNS, it doesn't work.
    If there is a mismatch in DNS, sometimes a re-discovery is required.
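    If it helps, here is a small illustrative C program (not part of the product; the hostname is a placeholder) that checks both directions of DNS resolution for an agent host, which is essentially what the data collection needs to succeed:
    /* Forward and reverse DNS check for an agent host. */
    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(int argc, char *argv[])
    {
        const char *host = (argc > 1) ? argv[1] : "agent.example.com";  /* placeholder */
        struct addrinfo hints, *res;
        char ip[NI_MAXHOST], name[NI_MAXHOST];

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        /* Forward lookup: name -> address */
        if (getaddrinfo(host, NULL, &hints, &res) != 0) {
            printf("forward lookup failed for %s\n", host);
            return 1;
        }
        getnameinfo(res->ai_addr, res->ai_addrlen, ip, sizeof(ip),
                    NULL, 0, NI_NUMERICHOST);
        printf("%s -> %s\n", host, ip);

        /* Reverse lookup: address -> name; it should map back to the agent's name */
        if (getnameinfo(res->ai_addr, res->ai_addrlen, name, sizeof(name),
                        NULL, 0, NI_NAMEREQD) != 0) {
            printf("reverse lookup failed for %s\n", ip);
            freeaddrinfo(res);
            return 1;
        }
        printf("%s -> %s\n", ip, name);
        freeaddrinfo(res);
        return 0;
    }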

  • Data Collection

    Hi Experts,
    I am facing a data collection error in a decentralized environment.
    Earlier, data collection (Standard Collection) was working normally, but recently I have been facing the issue below.
    If I launch Standard Collection, it errors out. The log file message (in the Data Pull program) is: Staging tables' data is not collected. Please submit the request Planning ODS Load.
    To resolve this, I tried the two methods below.
    1. First launch Planning Data Collection - Purge Staging Tables, then launch Standard Collection; but it also fails, with an error in the program "APS Collections Miscellaneous Task". Error message: FDPSTP failed due to ORA-06512: at "APPS.MSC_UTIL", line 1192.
    2. Launch Planning Data Collection - Purge Staging Tables, then Data Pull (manual launch), then Planning ODS Load (manual launch). Same error message.
    In both cases I get a negative result.
    So, what setup is missing? Does this need a profile option? How can I solve this?
    I would appreciate your early answer.
    Regards,
    Ramesh

    Hi
    The package is valid (I used the suggested query "select object_name, object_type, status from all_objects where object_name = 'MSC_UTIL'").
    The data collection issue is still not resolved. Below is what I have tried.
    Try 1:
    1. I launched "Planning Data Collection - Purge Staging Tables" with validation = No (Later I tried with Validation = Yes also) then
    2. I launched Standard Collection (as standard way)
    Result:
    Planning ODS Load errors out; the log file shows the error below:
    Exceptions posted by this request
    Concurrent Request for "Planning ODS Load" has completed with Error.
    Try 2:
    1. I launched "Planning Data Collection - Purge Staging Tables" with validation = No (Later I tried with Validation = Yes also) then
    2. I launched "Planning Data Pull" manually (Completed Normal) then
    3. I launched "Planning ODS Load" manually
    Result:
    Planning ODS Load errors out because of the following.
    I get the error message below in the log file of the concurrent program "Generate Trading Partner Keys".
    **Starts**27-FEB-2011 16:20:16
    ORACLE error 6512 in FDPSTP
    Cause: FDPSTP failed due to ORA-06512: at "APPS.MSC_CL_SETUP_ODS_LOAD", line 5094
    ORA-06512: at "APPS.MSC_CL_COLLECTION", line 6862
    ORA-06512: at line 1
    I would appreciate your early reply.
    Regards,
    Ramesh

  • Data Collection Jobs fail to run

    Hello,
    I have installed three servers with SQL Server 2012 Standard Edition, which should be identically installed and configured.
    Part of the installation was setting up Data Collection, which worked without problems on two of the servers, but not on the last one, where 5 of the SQL Server Agent jobs fail.
    The jobs failing are:
    collection_set_1_noncached_collect_and_upload
    Message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_2_collection
    Step 1 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_2_upload
    Step 2 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_3_collection
    Step 1 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_3_upload
    Step 2 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    When I try to execute one of the jobs, I get the following event in the System Log:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Application Popup" Guid="{47BFA2B7-BD54-4FAC-B70B-29021084CA8F}" />
        <EventID>26</EventID>
        <Version>0</Version>
        <Level>4</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2013-06-04T11:23:10.924129800Z" />
        <EventRecordID>29716</EventRecordID>
        <Correlation />
        <Execution ProcessID="396" ThreadID="1336" />
        <Channel>System</Channel>
        <Computer>myServer</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="Caption">DCEXEC.EXE - System Error</Data>
        <Data Name="Message">The program can't start because SqlTDiagN.dll is missing from your computer. Try reinstalling the program to fix this problem.</Data>
      </EventData>
    </Event>
    I have tried removing and reconfiguring the Data Collection two times using the stored procedure msdb.dbo.sp_syscollector_cleanup_collector and removing the underlying database.
    Both times with the same result.
    Below is the basic information about the setup of the server:
    Does any one have any suggestions about what the problem could be and what the cure might be?
    Best regards,
    Michael Andy Poulsen

    I tried running a repair on the SQL Server installation.
    This solved the problem.
    The only thing is that now I have to figure out the meaning of these last lines in the log of the repair:
    "The following warnings were encountered while configuring settings on your SQL Server.  These resources / settings were missing or invalid so default values were used in recreating the missing resources.  Please review to make sure they don’t require further customization for your applications:
    Service SID support has been enabled on the service.
    Service SID support has been enabled on the service."
    /Michael Andy Poulsen

  • Data acquisition: task specified does not exist

    I am trying to do an analog read.   Apparently, in the original code (which I did not write), the data acquisition is a multithreaded task. (I am not a real programmer, so I know next to nothing about multithreading, and not a whole lot about programming)...
    When I tried to collect data (by pressing our "Data Collect" button on the GUI), I got an error along the lines of:
    "Task specified is invalid or does not exist. Status Code: -200088"
    I have figured out that the task the error refers to is called SetCollectData, for which I am not able to find a definition.
    It runs fine when I comment that line out, but of course, no data gets collected!
    Here is the actual data acquisition portion of the code:
    if (DAQInputTask && GetCollectData())
    {
        if ((AiData = malloc(sampsPerChan * AiNumChannels * sizeof(float64))) == NULL)
        {
            MessagePopup("Error", "Not enough memory");
            goto Error;
        }
        DAQmxErrChk(DAQmxReadAnalogF64(DAQInputTask, -1, 0, DAQmx_Val_GroupByScanNumber, AiData, sampsPerChan * AiNumChannels, &numRead, NULL));
        fwrite(AiData, sizeof(float64), sampsPerChan * AiNumChannels, AiRawData);
        /* Release the buffer instead of only nulling its first element */
        free(AiData);
        AiData = NULL;
    }
    Any ideas?
    I have switched to NIDAQmx drivers, and have had a host of problems since switching to DAQmx functions.  So, this new problem may still be related. 
    Thanks!

    Tasks in CVI are the basis of data acquisition: every operation runs under a task, which groups the channel definitions, triggers, acquisition rate and almost every other element that constitutes an acquisition process.
    You should try to find information about the tasks used in your project, possibly by asking the original programmer; at the same time, you should learn how DAQmx handles the data acquisition process. Here you can find the help page related to tasks: I suggest you read that page and use it as a starting point for your search. Keep in mind that a task can be defined either inside the program or within MAX. One possible cause for not finding the task while running the program is that it has been defined in MAX on one machine and you have moved the project to another one without also moving the MAX information from the original PC, but this is only a guess.
    It's not clear from your words whether you are modifying an existing project developed in DAQmx from the beginning or you are moving an old NI-DAQ project to DAQmx. These are very different scenarios. In any case you should see the exact line where the error arises, look for the task handle used in that line (e.g. 'DAQInputTask' in the code you posted) and find the line where the task is loaded/created (ultimately, try a search for that variable name in all your source files and examine each line where it is used).
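    For illustration only, here is a minimal sketch of defining and using a task entirely in code with the DAQmx C API, so that it does not depend on anything stored in MAX (the device "Dev1", channel, rate and read size are placeholders, not your project's actual settings):
    #include <NIDAQmx.h>
    #include <stdio.h>

    int main(void)
    {
        TaskHandle task = 0;
        float64 data[1000];
        int32 read = 0;

        /* Create the task in code instead of loading a MAX-defined task */
        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                              DAQmx_Val_FiniteSamps, 1000);

        DAQmxStartTask(task);
        DAQmxReadAnalogF64(task, 1000, 10.0, DAQmx_Val_GroupByScanNumber,
                           data, 1000, &read, NULL);
        printf("Read %d samples\n", (int)read);

        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }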
    Proud to use LW/CVI from 3.1 on.
    My contributions to the Developer Zone Community
    If I have helped you, why not give me a kudos?

  • BI data collection question

    Dear colleagues,
    I have one big question regarding the BI data collection functionality (within the solution reporting tab of T-cd: DSWP).
    To whom it may concern,
    According to the SAP tutor located at the learning map,
    it looks like the SolMan system requires XI to be installed in order to use the BI data collection. Is this correct? If so, is there any step-by-step configuration guide that covers everything from installing XI to enabling the BI data collection function?
    Thank you guys in advance.
    rgds,
    Yoshi

    Hi,
    But we want to maintain Item Master data in ITEM02 and Item Branch data in ITEM03 as separate master data InfoObjects, because the descriptions keep changing; that is why I created different master data InfoObjects.
    Let me give some sample data:
    Record1: IT001 LOT1 BRAN1 CODE1 CODE2 DS01 DS02 in  2006
    Record2: IT001 LOT1 BRAN1 CODE1 CODE2 DS05 DS04 in  2007
    If you look at the data above, in 2006 the item descriptions were DS01, DS02 and in 2007 they changed to DS05, DS04 for the same item number.
    We want the item descriptions in 2006 and 2007 to stay the same even if we change the descriptions in 2007. How can that be achieved?
    If I go your way, how can we achieve this?
    Do we need delta loads?
    Also, do I need to create an InfoCube? Can I use the ITEMNO InfoObject as a data target?
    Please let me know your thoughts.
    Regards,
    ALEX
