Could Not Compile File System Watcher For Mac
Using the File System Events API

The File System Events API consists of several distinct groups of functions. You can obtain general information about volumes and events by using functions that begin with FSEvents. You can create a new event stream, perform operations on the stream, and so on using functions that begin with FSEventStream.
The life cycle of a file system events stream is as follows:

1. The application creates a stream by calling FSEventStreamCreate or FSEventStreamCreateRelativeToDevice.
2. The application schedules the stream on the run loop by calling FSEventStreamScheduleWithRunLoop.
3. The application tells the file system events daemon to start sending events by calling FSEventStreamStart.
4. The application services events as they arrive. The API posts events by calling the callback function specified in step 1.
5. The application tells the daemon to stop sending events by calling FSEventStreamStop. If the application needs to restart the stream, go to step 3.
6. The application unschedules the stream from its run loop by calling FSEventStreamUnscheduleFromRunLoop.
7. The application invalidates the stream by calling FSEventStreamInvalidate.
8. The application releases its reference to the stream by calling FSEventStreamRelease.
These steps are explained in more detail in the sections that follow.

Adding Include Directives

Before you use the file system event stream API, you must include the Core Services framework as follows:

#include <CoreServices/CoreServices.h>

When you compile, you must link against the Core Services framework by adding it to your target in Xcode or by adding the flag -framework CoreServices to your linker flags on the command line or in a Makefile.

Creating an Event Stream

The file system events API supports two types of event streams: per-disk event streams and per-host event streams. Before you can create a stream, you must decide which type of stream to create: a per-host event stream or a per-disk event stream. You can create these streams by calling the functions FSEventStreamCreate and FSEventStreamCreateRelativeToDevice, respectively. A per-host event stream consists of events whose IDs are increasing with respect to other events on that host.
These IDs are guaranteed to be unique with one exception: if additional disks are added from another computer that was also running OS X v10.5 or later, historical IDs may conflict between these volumes. Any new events will automatically start after the highest-numbered historical ID for any attached drive. A per-disk event stream, by contrast, consists of events whose IDs are increasing with respect to previous events on that disk. It does not have any relationship with other events on other disks, and thus you must create a separate event stream for each physical device that you wish to monitor. In general, if you are writing software that requires persistence, you should use per-disk streams to avoid any confusion due to ID conflicts. By contrast, per-host streams are most convenient if you are monitoring for changes in a directory or tree of directories during normal execution, such as watching a queue directory. Note: Because disks can be modified by computers running earlier versions of OS X (or potentially other operating systems), you should treat the events list as advisory rather than a definitive list of all changes to the volume.
If a disk is modified by a computer running a previous version of OS X, the historical log is discarded. For example, backup software should still periodically perform a full sweep of any volume to ensure that no changes fall through the cracks. If you are monitoring files on the root file system, either stream mechanism will behave similarly.
For example, the following snippet shows how to create an event stream:

/* Define variables and create a CFArray object containing
   CFString objects containing paths to watch. */
CFStringRef mypath = CFSTR("/path/to/scan");
CFArrayRef pathsToWatch = CFArrayCreate(NULL, (const void **)&mypath, 1, NULL);
void *callbackInfo = NULL; // could put stream-specific data here.
FSEventStreamRef stream;
CFAbsoluteTime latency = 3.0; /* Latency in seconds. */

/* Create the stream, passing in a callback. */
stream = FSEventStreamCreate(NULL,
    &myCallbackFunction,
    callbackInfo,
    pathsToWatch,
    kFSEventStreamEventIdSinceNow, /* Or a previous event ID. */
    latency,
    kFSEventStreamCreateFlagNone /* Flags explained in reference. */
);

Once you have created an event stream, you must schedule it on your application’s run loop. To do this, call FSEventStreamScheduleWithRunLoop, passing in the newly-created stream, a reference to your run loop, and a run loop mode. For more information about run loops, read Threading Programming Guide.
If you don’t already have a run loop, you will need to devote a thread to this task. After creating a thread using your API of choice, call CFRunLoopGetCurrent to allocate an initial run loop for that thread. Any future calls to CFRunLoopGetCurrent will return the same run loop.
For example, the following snippet shows how to schedule a stream, called stream, on the current thread’s run loop (not yet running):

FSEventStreamRef stream;
/* Create the stream before calling this. */
FSEventStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);

The final step in setting up an event stream is to call FSEventStreamStart. This function tells the event stream to begin sending events. Its sole parameter is the event stream to start.
Once the event stream has been created and scheduled, if your run loop is not already running, you should start it by calling CFRunLoopRun.

Handling Events

Your event handler callback must conform to the prototype for FSEventStreamCallback. The parameters are described in the reference documentation for the FSEventStreamCallback data type. Your event handler receives three lists: a list of paths, a list of identifiers, and a list of flags. In effect, these represent a list of events. The first event consists of the first entry taken from each of the arrays, and so on. Your handler must iterate through these lists, processing the events as needed.
For each event, you should scan the directory at the specified path, processing its contents as desired. Normally, you need to scan only the exact directory specified by the path. However, there are three situations in which this is not the case:

If an event in a directory occurs at about the same time as one or more events in a subdirectory of that directory, the events may be coalesced into a single event. In this case, you will receive an event with the kFSEventStreamEventFlagMustScanSubDirs flag set. When you receive such an event, you must recursively rescan the path listed in the event. The additional changes are not necessarily in an immediate child of the listed path.

If a communication error occurs between the kernel and the user-space daemon, you may receive an event with either the kFSEventStreamEventFlagUserDropped or kFSEventStreamEventFlagKernelDropped flag set. In either case, you must do a full scan of any directories that you are monitoring because there is no way to determine what may have changed.
Note: When an event is dropped, the kFSEventStreamEventFlagMustScanSubDirs flag is also set. Thus, it is not necessary to explicitly check for the dropped event flags when determining whether to perform a full rescan of a path. The dropped event flags are provided purely for informational purposes.

If the root directory that you are watching is deleted, moved, or renamed (or if any of its parent directories are moved or renamed), the directory may cease to exist. If you care about this, you should pass the kFSEventStreamCreateFlagWatchRoot flag when creating the stream. In this case, you will receive an event with the kFSEventStreamEventFlagRootChanged flag and an event ID of zero (0). In this case, you must rescan the entire directory because it may not exist. If you need to figure out where the directory moved, you should open the root directory with open, then pass F_GETPATH to fcntl to find its current path. See the manual page for fcntl for more information.
If the number of events approaches 2^64, the event identifier will wrap around. When this happens, you will receive an event with the kFSEventStreamEventFlagEventIdsWrapped flag set.
Fortunately, at least in the near term, this is unlikely to occur in practice, as 64 bits allows enough room for about one event per eraser-sized region on the Earth’s surface (including water) and would require about 2000 exabytes (2 million million gigabytes) of storage to hold them all. However, you should still check for this flag and take appropriate action if you receive it. As part of your handler, you may sometimes need to obtain a list of paths being watched by the current event stream.
You can obtain that list by calling FSEventStreamCopyPathsBeingWatched. Sometimes, you may wish to monitor where you are in the stream.
You might, for example, choose to do less processing if your code is slipping significantly behind. You can find out the latest event included in the current batch of events by calling FSEventStreamGetLatestEventId (or by examining the last event in the list). You can then compare this with the value returned by FSEventsGetCurrentEventId, which returns the highest-numbered event in the system. For example, the following code snippet shows a very simple handler. Note: If you passed the kFSEventStreamCreateFlagUseCFTypes flag when creating the stream, you should cast the eventPaths value to a CFArrayRef object.
Using Persistent Events

One of the most powerful features of file system events is their persistence across reboots. This means that your application can easily find out what happened since a particular time or a particular event in the distant past. By doing so, you can find out what files have been modified even when your application is not running. This can greatly simplify tasks such as backing up modified files, checking for changed dependencies in multi-file projects, and so on. To work with persistent events, your application should regularly store the last event ID that it processes. Then, when it needs to go back and see what files have changed, it only needs to look at events that occurred after the last known event.
To obtain all events since a particular event in the past, you pass the event ID in the sinceWhen argument to FSEventStreamCreate or FSEventStreamCreateRelativeToDevice. On a per-device basis, you can also easily use a timestamp to determine which events to include. To do this, you must first call FSEventsGetLastEventIdForDeviceBeforeTime to obtain the last event ID for that device prior to the specified timestamp. You then pass the resulting value to FSEventStreamCreateRelativeToDevice. This is described further in Special Considerations for Per-Device Streams.
When working with persistent events, a commonly-used technique is to combine file system event notifications with a cached “snapshot” of the metadata of files within the tree. This process is described further in Building a Directory Hierarchy Snapshot.
Building a Directory Hierarchy Snapshot

File system events tell you that something in a given directory changed. In some cases, this is sufficient—for example, if your application is a print or mail spooler, all it needs to know is that a file has been added to the directory. In some cases, however, this is not enough, and you need to know precisely what changed within the directory.
The simplest way to solve this problem is to take a snapshot of the directory hierarchy, storing your own copy of the state of the system at a given point in time. You might, for example, store a list of filenames and last modified dates, thus allowing you to determine which files have been modified since the last time you performed a backup. You do this by iterating through the hierarchy and building up a data structure of your choice. As you cache this metadata, if you see changes during the caching process, you can reread the directory or directories that changed to obtain an updated snapshot. Once you have a cached tree of metadata that accurately reflects the current state of the hierarchy you are concerned with, you can then determine what file or files changed within a directory or hierarchy (after a file system event notification) by comparing the current directory state with your snapshot.
Important: To avoid missing changes, you must start monitoring the directory before you start scanning it. Because of the inherently non-deterministic latency in any notification mechanism on a multitasking operating system, it may not always be obvious whether the action that triggered an event occurred before or after a nested subdirectory was scanned. To guarantee that no changes are lost, it is best to always rescan any subdirectory that is modified during scanning rather than taking a time stamp for each subdirectory and trying to compare those time stamps with event time stamps.

OS X provides a number of APIs that can make this easier. The scandir function returns an array of directory entries that you can quickly iterate through. This is somewhat easier than reading a directory manually with opendir, readdir, and so on, and is slightly more efficient since you will always iterate through the entire directory while caching anyway.

The binary tree functions tsearch, tfind, twalk, and tdelete can simplify working with large search trees. In particular, binary trees are an easy way of quickly finding the cached file information for a particular directory. The following code snippet demonstrates the proper way to call these functions:

Listing 2-1 Using the tsearch, tfind, twalk, and tdelete API

Note: Despite the seeming consistency, the twalk function does not take the address of the root, so the ampersand is not needed, and indeed will cause a crash if you use it. The values passed to the callback by twalk and the values returned by tsearch and tfind are the address where the pointer to the data is stored, not the data value itself.
Because this code passes in the address of an integer, it is necessary to dereference that value twice: once for the original address-of operator and once to dereference the pointer to that pointer that these functions return. Unlike the other functions, however, the tdelete function does not return an address within the tree where the data is stored. This is because the data is no longer stored in the tree. Instead, it returns the parent node of the node that it deleted.

The POSIX functions stat and lstat provide easy access to file metadata.
These two functions differ in their treatment of symbolic links. The lstat function provides information about the link itself, while the stat function provides information about the file that the link points to.
Generally speaking, when working with file system event notifications, you will probably want to use lstat, because changes to the underlying file will not result in a change notification for the directory containing the symbolic link to that file. However, if you are working with a controlled file structure in which symbolic links always point within your watched tree, you might have reason to use stat. For an example of a tool that builds a directory snapshot, see the sample code.

Cleaning Up

When you no longer need a file system event stream, you should always clean up the stream to avoid leaking memory and descriptors.
Before cleaning up, however, you must first stop the stream by calling FSEventStreamStop. Next, you should call FSEventStreamInvalidate. This function unschedules the stream from all run loops with a single call. If you need to unschedule it from only a single run loop, or if you need to move the event stream between two run loops, you should instead call FSEventStreamUnscheduleFromRunLoop. You can then reschedule the event stream, if desired, by calling FSEventStreamScheduleWithRunLoop.
Once you have invalidated the event stream, you can release it by calling FSEventStreamRelease. When the stream release and stream retain counts balance and there are no longer any outstanding retains on the stream, the stream will be freed. There are three other cleanup-related functions that you should be aware of under certain circumstances. If your application needs to make certain that the file system has reached a steady state prior to cleaning up the stream, you may find it useful to flush the stream. You can do this with one of two functions: FSEventStreamFlushSync or FSEventStreamFlushAsync.

When flushing events, the synchronous call will not return until all pending events are flushed. The asynchronous call will return immediately, and will return the event ID (of type FSEventStreamEventId) of the last event pending. You can then use this value in your callback function to determine when the last event has been processed, if desired. The final function related to cleaning up is FSEventsPurgeEventsForDeviceUpToEventId. This function can only be called by the root user because it destroys the historical record of events on a volume prior to a given event ID.
As a general rule, you should never call this function because you cannot safely assume that your application is the only consumer of event data. If you are writing a specialized application (an enterprise backup solution, for example), it may be appropriate to call this function to trim the event record to some reasonable size to prevent it from growing arbitrarily large. You should do this only if the administrator explicitly requests this behavior, however, and you should always ask for confirmation (either before performing the operation or before enabling any rule that would cause it to be performed at a later time).

Special Considerations for Per-Device Streams

In addition to the considerations described above, streams created with FSEventStreamCreateRelativeToDevice (per-device streams) have some special characteristics that you should be aware of:
All paths are relative to the root of the volume that you are monitoring, not relative to the system root. This applies to both the path used when creating the stream and to any path that your callback receives as part of an event.
Device IDs may not remain the same across reboots (particularly with removable devices). It is your responsibility to ensure that the volume you are looking at is the right one by comparing its UUID. In addition to the functions provided for systemwide streams, you can obtain the device associated with a stream by calling FSEventStreamGetDeviceBeingWatched. You can obtain the UUID for a device by calling FSEventsCopyUUIDForDevice. If this UUID is different than the one obtained from a previous run, this can mean many things. It could mean that the user has two volumes with the same name, that the user has reformatted the volume with the same name, or that the event IDs have been purged for the volume.

In any of these cases, any previous events for the volume do not apply to this particular volume, but they may still be valid for another volume. If you find that the UUID for a volume matches what was stored on a previous run, but the event ID is lower than the last value you stored, this may mean that the user restored the volume from a backup, or it may mean that the IDs have wrapped around or have been purged. In either case, any stored events you may have for the device are invalid. Finally, if you are using persistent events, you can also use the FSEventsGetLastEventIdForDeviceBeforeTime function to find the last event prior to a time stamp. This event ID is persistent, and can be particularly useful for performing incremental backups.
The time format used is a CFAbsoluteTime value, which is measured in seconds since January 1, 2001. For other timestamp formats, you must convert them to this format as follows:

If you are writing a Cocoa application, you should use an NSDate object to perform any conversions, then use the timeIntervalSinceReferenceDate method to obtain the corresponding CFAbsoluteTime value. (You can transparently pass an NSDate object as a CFDateRef.)

If you are starting with a POSIX timestamp in a non-Cocoa application, you should subtract kCFAbsoluteTimeIntervalSince1970 from the value to convert it to a CFAbsoluteTime value. Be sure to always use timestamps based on GMT.

If you are working with a legacy Carbon timestamp in a non-Cocoa application, you would subtract kCFAbsoluteTimeIntervalSince1904 instead.

For more information about date and time types, see the Core Foundation date and time reference documentation.
Using File Watchers

This feature is only supported in the Ultimate edition. File Watcher is a built-in IntelliJ IDEA tool that allows you to automatically run a command-line tool, such as a compiler, formatter, or linter, when you change or save a file in the IDE. You can configure a File Watcher from scratch or use one of the available templates. A configured File Watcher can be saved in your project settings or in the IDE settings and used in different projects. Before you start, install the File Watchers plugin on the Plugins page of the IDE settings.
Creating a File Watcher

To create a File Watcher:

In the Settings/Preferences dialog (Ctrl+Alt+S), click File Watchers under Tools. The File Watchers page opens, showing a list of File Watchers that are already configured in this project and in the IDE.

Click the Add button and choose the predefined template from which you want to create a File Watcher. The choice depends on the tool you are going to use. To use a tool that is not on the list, choose Custom. The New Watcher dialog box opens.

In the Name text box, type the name of the File Watcher. By default, IntelliJ IDEA suggests the name of the selected predefined template.

Configuring the expected type and location of input files

By default, the File type field shows the file type in accordance with the chosen predefined template.
This option is available only for Babel, Closure Compiler, Compass, Jade, Less, Sass/SCSS, Stylus, UglifyJS, and YUI Compressor JS. Use the controls in the Files to watch area to define the range of files where you want to apply the File Watcher. From the File type drop-down list, choose the expected type of input files. The File Watcher will consider only files of this type as subject for analyzing and processing. File types are recognised based on file name patterns.

Choose the Scope in which the File Watcher is applicable. Changes in these files will invoke the File Watcher either immediately or upon save or frame deactivation, depending on the status of the Auto-save edited files to trigger the watcher checkbox. Choose one of the predefined scopes from the drop-down list. For a project-level File Watcher, you can also click Browse to configure a custom scope in the Scopes dialog that opens.

All Places: all the scopes listed below.
Project Files: all the files within the project content roots.
Project Production Files: all the files within the project content roots excluding test sources.
Project Test Files: all the files within the project test source roots.
Scratches and Consoles: all the files from the Scratches and Consoles directory located in the Project tool window.
Open Files: all the files that are currently opened in the editor.
Current File: the file opened in the active editor tab.
VCS Scopes: these scopes are only available if your project is under version control.
Changed Files: all changed files, that is, all files associated with all existing changelists.
Default Changelist: all the files associated with the changelist Default.

Alternatively, click Browse and configure a custom scope in the Scopes dialog box that opens.
Optionally, specify how you want the File Watcher to deal with dependencies. A root file is a file that is not included (e.g. via import) in any other file. To run the File Watcher only against root files, select the Track only root files checkbox. Clear the checkbox to run the File Watcher against the file from which it is invoked and against all the files in which this file is recursively included within the specified scope.

Note that the Scope setting overrides the Track only root files checkbox setting: if a dependency is outside the specified scope, the File Watcher is not applied to it.

Configuring interaction with the external tool

To use the program file name instead of its full path, add the path to its folder to the system environment variable PATH.
On Windows, you can skip the .com, .exe, .cmd, or .bat extension. To use a .jar archive, specify the absolute path to it. Alternatively, to use a relative path, add its parent folder to the IntelliJ IDEA path variables on the Appearance & Behavior | Path Variables page of IntelliJ IDEA settings (Ctrl+Alt+S). When specifying the arguments, follow these rules:

Use spaces as separators.
If an argument contains spaces, enclose the spaces or the entire argument in double quotes: some" "arg or "some arg".
If an argument contains double quotes, use backslashes to escape them: -Dmy.prop=\"quoted_value\".
If you leave the Working Directory field empty, IntelliJ IDEA uses the directory of the file where the File Watcher is invoked. In the Tool to run on changes area, specify the tool to use, the arguments to pass to it, the expected output file types, and so on. In the Program text box, specify the path to the executable file of the tool (.exe, .cmd, .bat, or other depending on the specific tool). If you are configuring a global File Watcher for a tool installed inside a project, you need to specify the path to it using a macro. To use one of the available macros in the path, press the Insert Macro button.

For example, with the path $ProjectFileDir$/node_modules/.bin/prettier, the File Watcher will use prettier installed in the project’s node_modules folder. In the Arguments text box, define the arguments to pass to the tool.
Arguments are usually specified using macros, for example, $FileName$ or $FileNameWithoutExtension$, that will be replaced with actual file names. Type the macros manually or click Insert Macro and select the relevant pattern from the list in the Macros dialog that opens. In the Output paths to refresh text box, specify the files where the tool stores its output: the resulting source code, source maps, and dependencies. Based on these settings, IntelliJ IDEA recognizes the files generated through compilation. Note that changing the value in the Output paths to refresh text box does not make the tool store its output in another place.
If you still need to do that, specify the desired custom output location in the Arguments text box: type the output paths with colons as separators and use macros. Output paths are usually specified using macros. Type the path manually or click Insert Macro and select the relevant pattern from the list. Expand the Working Directory and Environment Variables hidden area. Define the environment variables.
For example, specify the PATH variable for the tools that are required for starting the tool you are configuring but are not referenced in the path to it. In most cases this is Node.js or ruby.exe. Such a situation may result from a custom manual installation instead of installation through the Node Package Manager (npm) or the gem manager. In the Working Directory text box, specify the directory to which the tool will be applied. Because the tool is always invoked in the context of a file, the default working directory is the directory of the current file. The default working directory is specified in all predefined templates through the $FileDir$ macro. To specify a custom working directory, type the path to it in the text box, or click Browse and choose the directory in the Select Path dialog box, or click Insert Macro and select the desired macro from the list in the dialog box.
Configuring advanced options

In the Advanced Options area, customize the default behaviour of the File Watcher. Specify the events that will invoke the File Watcher:

To invoke the File Watcher as soon as any changes are made to the source code, select the Auto-save edited files to trigger the watcher checkbox. When the checkbox is cleared, the File Watcher starts upon save (File | Save All) or when you move the focus from IntelliJ IDEA (on frame deactivation). By default, the File Watcher wakes up on any saved change, including the updates received from your version control system when you, for example, check out a branch. To ignore such changes and invoke the File Watcher only when you update your code in IntelliJ IDEA, clear the Trigger the watcher on external changes checkbox.
Specify whether you want the File Watcher to interact with the IntelliJ IDEA syntax parser:

When the Trigger watcher regardless of syntax errors checkbox is selected, the File Watcher starts regardless of the syntactical correctness of a file. The File Watcher will start upon update, save, or frame deactivation, depending on the status of the Auto-save edited files to trigger the watcher checkbox. When the Trigger watcher regardless of syntax errors checkbox is cleared, the File Watcher ignores all triggers in files that are syntactically invalid and starts only in error-free files.
Use the Create output file from stdout checkbox to specify how you want to generate the output file. When the checkbox is selected, IntelliJ IDEA reads the native tool's standard output stream (stdout) and generates the resulting files from it. When the checkbox is cleared, the tool writes its output directly to the files specified in the Output paths to refresh field. In the Show console drop-down list, choose when you want the File Watcher to open the console. Always: with this option, the console always opens after the tool execution is completed. On error: with this option, the console opens after the tool execution only when the exit code is different from 0.
Never: choose this option to suppress opening the console at all.

Showing information reported by File Watcher in the editor

If the tool configured in the File Watcher reports errors or warnings about your code with links to the files and specific lines, you can see this information right in the editor:

In Preferences/Settings | Editor | Inspections, enable the File Watcher Problems inspection. In the Output Filters field of the dialog, describe the format of the output you want to match. Use the macros $FILE_PATH$, $LINE$, and $MESSAGE$ for that.
You need to escape special symbols, such as braces and dots, with backslashes. The text matched with the $MESSAGE$ macro will be shown in the editor.

Saving, enabling and disabling File Watchers

Once you’ve saved the new File Watcher, you can decide if it’s going to be available only in the current project (select Project in the right column) or for all projects (select Global). To enable or disable a File Watcher, select or clear the checkbox next to it in the Preferences/Settings | Tools | File Watchers dialog. If an error occurs while a File Watcher is running, the File Watcher is automatically disabled.
To restore the status, enable the File Watcher manually. When a File Watcher is enabled, it starts automatically as soon as a file of the selected type and in the selected scope is changed or saved.

Examples of customizing the behaviour of an external tool in a File Watcher

The only way to influence a third-party tool is to pass arguments to it just as if you were working in command-line mode.
These arguments are specific to each tool. Below are two examples of customizing the default output location for the CoffeeScript compiler. Suppose you have a project with the following folder structure. By default, the generated files will be stored in the folder where the original file is.
You can change this default location and have the generated files stored in the js folder. Moreover, you can have them stored in a flat list or arranged in the folder structure that repeats the original structure under the app node. To store all the generated files in the output js folder. In the Arguments text box, type.